Test Report: Docker_Linux_crio_arm64 21968

c47dc458d63a230593369798adacaa3ab200078c:2025-11-23:42467

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.34
35 TestAddons/parallel/Registry 16.26
36 TestAddons/parallel/RegistryCreds 0.48
37 TestAddons/parallel/Ingress 143.25
38 TestAddons/parallel/InspektorGadget 5.32
39 TestAddons/parallel/MetricsServer 5.4
41 TestAddons/parallel/CSI 62.71
42 TestAddons/parallel/Headlamp 3.19
43 TestAddons/parallel/CloudSpanner 6.3
44 TestAddons/parallel/LocalPath 8.4
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 6.28
97 TestFunctional/parallel/ServiceCmdConnect 603.55
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.92
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
135 TestFunctional/parallel/ServiceCmd/Format 0.53
136 TestFunctional/parallel/ServiceCmd/URL 0.57
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.26
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.28
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
191 TestJSONOutput/pause/Command 2.46
197 TestJSONOutput/unpause/Command 2.01
293 TestPause/serial/Pause 7.32
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.8
304 TestStartStop/group/old-k8s-version/serial/Pause 7.38
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.75
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.13
322 TestStartStop/group/no-preload/serial/Pause 6.39
328 TestStartStop/group/embed-certs/serial/Pause 7.8
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.55
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 4.14
342 TestStartStop/group/newest-cni/serial/Pause 6.45
351 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.6
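
Note: the addon failures detailed below (Volcano, Registry, RegistryCreds) all end the same way: "addons disable" exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube's paused-state check runs "sudo runc list -f json" on the node, and that command fails with "open /run/runc: no such file or directory" even though crictl can list the kube-system containers. A minimal reproduction sketch, assuming the addons-832672 profile from this run is still available:

	# Same command the paused-state check runs on the node (fails in these logs)
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo runc list -f json
	# The CRI view of the same containers (succeeds in these logs)
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system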
TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable volcano --alsologtostderr -v=1: exit status 11 (339.309946ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:19:38.134939  548574 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:19:38.136551  548574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:38.136614  548574 out.go:374] Setting ErrFile to fd 2...
	I1123 10:19:38.136635  548574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:38.136968  548574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:19:38.137316  548574 mustload.go:66] Loading cluster: addons-832672
	I1123 10:19:38.137785  548574 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:38.137826  548574 addons.go:622] checking whether the cluster is paused
	I1123 10:19:38.137980  548574 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:38.138013  548574 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:19:38.138585  548574 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:19:38.169323  548574 ssh_runner.go:195] Run: systemctl --version
	I1123 10:19:38.169377  548574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:19:38.187915  548574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:19:38.299784  548574 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:19:38.299882  548574 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:19:38.334814  548574 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:19:38.334837  548574 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:19:38.334842  548574 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:19:38.334850  548574 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:19:38.334853  548574 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:19:38.334857  548574 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:19:38.334860  548574 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:19:38.334863  548574 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:19:38.334866  548574 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:19:38.334873  548574 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:19:38.334876  548574 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:19:38.334880  548574 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:19:38.334882  548574 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:19:38.334885  548574 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:19:38.334888  548574 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:19:38.334893  548574 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:19:38.334896  548574 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:19:38.334900  548574 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:19:38.334903  548574 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:19:38.334905  548574 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:19:38.334910  548574 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:19:38.334913  548574 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:19:38.334916  548574 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:19:38.334918  548574 cri.go:89] found id: ""
	I1123 10:19:38.334969  548574 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:38.358765  548574 out.go:203] 
	W1123 10:19:38.361806  548574 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:19:38.361836  548574 out.go:285] * 
	* 
	W1123 10:19:38.369150  548574 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:19:38.372233  548574 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)

TestAddons/parallel/Registry (16.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.748493ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003363619s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004037105s
addons_test.go:392: (dbg) Run:  kubectl --context addons-832672 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-832672 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-832672 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.746741113s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 ip
2025/11/23 10:20:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable registry --alsologtostderr -v=1: exit status 11 (256.549037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:20:04.667912  549523 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:04.668726  549523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:04.668766  549523 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:04.668790  549523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:04.669213  549523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:20:04.670373  549523 mustload.go:66] Loading cluster: addons-832672
	I1123 10:20:04.670787  549523 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:04.670805  549523 addons.go:622] checking whether the cluster is paused
	I1123 10:20:04.670914  549523 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:04.670929  549523 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:20:04.671456  549523 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:20:04.689641  549523 ssh_runner.go:195] Run: systemctl --version
	I1123 10:20:04.689709  549523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:20:04.707001  549523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:20:04.811904  549523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:20:04.811993  549523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:20:04.843218  549523 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:20:04.843243  549523 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:20:04.843250  549523 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:20:04.843253  549523 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:20:04.843257  549523 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:20:04.843260  549523 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:20:04.843263  549523 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:20:04.843267  549523 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:20:04.843270  549523 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:20:04.843276  549523 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:20:04.843299  549523 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:20:04.843305  549523 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:20:04.843308  549523 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:20:04.843311  549523 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:20:04.843315  549523 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:20:04.843320  549523 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:20:04.843329  549523 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:20:04.843333  549523 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:20:04.843337  549523 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:20:04.843340  549523 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:20:04.843345  549523 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:20:04.843354  549523 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:20:04.843357  549523 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:20:04.843360  549523 cri.go:89] found id: ""
	I1123 10:20:04.843426  549523 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:20:04.858211  549523 out.go:203] 
	W1123 10:20:04.861161  549523 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:20:04.861191  549523 out.go:285] * 
	* 
	W1123 10:20:04.868485  549523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:20:04.871515  549523 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.26s)

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.948039ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-832672
addons_test.go:332: (dbg) Run:  kubectl --context addons-832672 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.609097ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:20:54.778134  550749 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:54.778897  550749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:54.778931  550749 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:54.778955  550749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:54.779734  550749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:20:54.780105  550749 mustload.go:66] Loading cluster: addons-832672
	I1123 10:20:54.780575  550749 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:54.780616  550749 addons.go:622] checking whether the cluster is paused
	I1123 10:20:54.780770  550749 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:54.780803  550749 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:20:54.781361  550749 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:20:54.798458  550749 ssh_runner.go:195] Run: systemctl --version
	I1123 10:20:54.798514  550749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:20:54.817351  550749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:20:54.924470  550749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:20:54.924558  550749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:20:54.958557  550749 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:20:54.958577  550749 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:20:54.958583  550749 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:20:54.958611  550749 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:20:54.958618  550749 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:20:54.958623  550749 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:20:54.958626  550749 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:20:54.958630  550749 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:20:54.958638  550749 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:20:54.958645  550749 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:20:54.958648  550749 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:20:54.958668  550749 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:20:54.958671  550749 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:20:54.958675  550749 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:20:54.958694  550749 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:20:54.958723  550749 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:20:54.958735  550749 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:20:54.958740  550749 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:20:54.958743  550749 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:20:54.958747  550749 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:20:54.958752  550749 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:20:54.958755  550749 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:20:54.958759  550749 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:20:54.958761  550749 cri.go:89] found id: ""
	I1123 10:20:54.958831  550749 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:20:54.974197  550749 out.go:203] 
	W1123 10:20:54.977125  550749 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:20:54.977152  550749 out.go:285] * 
	* 
	W1123 10:20:54.984474  550749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:20:54.987474  550749 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (143.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-832672 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-832672 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-832672 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4e6186fd-c87b-41ef-a191-4cb5a359ada1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4e6186fd-c87b-41ef-a191-4cb5a359ada1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003502434s
I1123 10:20:24.262451  541900 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.501423497s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-832672 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
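The Ingress failure has a different signature from the addon-disable failures: the curl run inside the node timed out (exit status 28, which matches curl's timeout exit code) even though the nginx pod reached Running. A manual probe sketch, assuming the same profile is still up; the -v and --max-time flags are added here for diagnosis and are not part of the original test command:

	# Re-run the request the test issues, with verbose output and an explicit timeout
	out/minikube-linux-arm64 -p addons-832672 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Confirm the ingress-nginx controller is ready and the Ingress object exists
	kubectl --context addons-832672 -n ingress-nginx get pods -o wide
	kubectl --context addons-832672 get ingress -A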
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-832672
helpers_test.go:243: (dbg) docker inspect addons-832672:

-- stdout --
	[
	    {
	        "Id": "3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2",
	        "Created": "2025-11-23T10:17:20.139779283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543077,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:20.198692713Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/hosts",
	        "LogPath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2-json.log",
	        "Name": "/addons-832672",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-832672:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-832672",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2",
	                "LowerDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-832672",
	                "Source": "/var/lib/docker/volumes/addons-832672/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-832672",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-832672",
	                "name.minikube.sigs.k8s.io": "addons-832672",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ac81a0ab4c67821052f538558acd818d27ad7628f4ba5d58d6456ceab807b45",
	            "SandboxKey": "/var/run/docker/netns/2ac81a0ab4c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-832672": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:2d:9c:bd:78:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39f142dc2e0330f3f717f783158c0e1012182cbdd04b57850dea4f941ef1a75a",
	                    "EndpointID": "a29433a4ffeb1b5ba94266313b3744451b16f65f3ca3f5e591814d37cec482de",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-832672",
	                        "3d8dabe9a410"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-832672 -n addons-832672
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-832672 logs -n 25: (1.546037245s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	│ delete  │ -p download-docker-549884                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-549884 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ --download-only -p binary-mirror-279599 --alsologtostderr --binary-mirror http://127.0.0.1:42529 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-279599   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ delete  │ -p binary-mirror-279599                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-279599   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ addons  │ enable dashboard -p addons-832672                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ addons  │ disable dashboard -p addons-832672                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ start   │ -p addons-832672 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:19 UTC │
	│ addons  │ addons-832672 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ addons  │ addons-832672 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-832672 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ addons  │ addons-832672 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ ip      │ addons-832672 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │ 23 Nov 25 10:20 UTC │
	│ addons  │ addons-832672 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ addons  │ addons-832672 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ addons  │ addons-832672 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ ssh     │ addons-832672 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ addons  │ addons-832672 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ addons  │ addons-832672 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-832672                                                                                                                                                                                                                                                                                                                                                                                           │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │ 23 Nov 25 10:20 UTC │
	│ addons  │ addons-832672 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:20 UTC │                     │
	│ addons  │ addons-832672 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:21 UTC │                     │
	│ addons  │ addons-832672 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:21 UTC │                     │
	│ ssh     │ addons-832672 ssh cat /opt/local-path-provisioner/pvc-158fdf5f-6f36-438b-8fb9-88aab27655a3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:21 UTC │ 23 Nov 25 10:21 UTC │
	│ addons  │ addons-832672 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:21 UTC │                     │
	│ addons  │ addons-832672 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:21 UTC │                     │
	│ ip      │ addons-832672 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:22 UTC │ 23 Nov 25 10:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:54.363486  542668 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:54.363629  542668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:54.363641  542668 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:54.363646  542668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:54.364008  542668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:16:54.364813  542668 out.go:368] Setting JSON to false
	I1123 10:16:54.365700  542668 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10763,"bootTime":1763882251,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:16:54.365775  542668 start.go:143] virtualization:  
	I1123 10:16:54.369248  542668 out.go:179] * [addons-832672] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:16:54.372992  542668 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:16:54.373124  542668 notify.go:221] Checking for updates...
	I1123 10:16:54.379330  542668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:54.382193  542668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:16:54.385120  542668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:16:54.388011  542668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:16:54.390873  542668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:16:54.393899  542668 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:54.429288  542668 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:16:54.429436  542668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:54.492185  542668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 10:16:54.482756046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:54.492303  542668 docker.go:319] overlay module found
	I1123 10:16:54.495485  542668 out.go:179] * Using the docker driver based on user configuration
	I1123 10:16:54.498197  542668 start.go:309] selected driver: docker
	I1123 10:16:54.498216  542668 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:54.498230  542668 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:16:54.498954  542668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:54.550202  542668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 10:16:54.541624639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:54.550387  542668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:54.550610  542668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:54.553324  542668 out.go:179] * Using Docker driver with root privileges
	I1123 10:16:54.556037  542668 cni.go:84] Creating CNI manager for ""
	I1123 10:16:54.556114  542668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:54.556128  542668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:54.556205  542668 start.go:353] cluster config:
	{Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:54.559233  542668 out.go:179] * Starting "addons-832672" primary control-plane node in "addons-832672" cluster
	I1123 10:16:54.562058  542668 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:54.564914  542668 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:54.567683  542668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:54.567731  542668 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:16:54.567743  542668 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:54.567751  542668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:54.567844  542668 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:16:54.567855  542668 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:16:54.568192  542668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/config.json ...
	I1123 10:16:54.568222  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/config.json: {Name:mk91c43859c1618dd2f2f8557f3936708ed084f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:54.583683  542668 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:16:54.583827  542668 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 10:16:54.583846  542668 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 10:16:54.583851  542668 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 10:16:54.583858  542668 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 10:16:54.583863  542668 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 10:17:13.225834  542668 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 10:17:13.225870  542668 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:17:13.225912  542668 start.go:360] acquireMachinesLock for addons-832672: {Name:mkc984d0fcfcecd7b88c6de76ca17d111bad3a06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:17:13.226031  542668 start.go:364] duration metric: took 97.929µs to acquireMachinesLock for "addons-832672"
	I1123 10:17:13.226058  542668 start.go:93] Provisioning new machine with config: &{Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:13.226131  542668 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:17:13.229562  542668 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 10:17:13.229810  542668 start.go:159] libmachine.API.Create for "addons-832672" (driver="docker")
	I1123 10:17:13.229846  542668 client.go:173] LocalClient.Create starting
	I1123 10:17:13.229970  542668 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 10:17:13.335853  542668 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 10:17:13.793308  542668 cli_runner.go:164] Run: docker network inspect addons-832672 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:17:13.809247  542668 cli_runner.go:211] docker network inspect addons-832672 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:17:13.809338  542668 network_create.go:284] running [docker network inspect addons-832672] to gather additional debugging logs...
	I1123 10:17:13.809359  542668 cli_runner.go:164] Run: docker network inspect addons-832672
	W1123 10:17:13.825324  542668 cli_runner.go:211] docker network inspect addons-832672 returned with exit code 1
	I1123 10:17:13.825357  542668 network_create.go:287] error running [docker network inspect addons-832672]: docker network inspect addons-832672: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-832672 not found
	I1123 10:17:13.825371  542668 network_create.go:289] output of [docker network inspect addons-832672]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-832672 not found
	
	** /stderr **
	I1123 10:17:13.825515  542668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:13.841522  542668 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ebdb0}
	I1123 10:17:13.841566  542668 network_create.go:124] attempt to create docker network addons-832672 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 10:17:13.841626  542668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-832672 addons-832672
	I1123 10:17:13.903373  542668 network_create.go:108] docker network addons-832672 192.168.49.0/24 created
	I1123 10:17:13.903407  542668 kic.go:121] calculated static IP "192.168.49.2" for the "addons-832672" container
	I1123 10:17:13.903496  542668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:17:13.918266  542668 cli_runner.go:164] Run: docker volume create addons-832672 --label name.minikube.sigs.k8s.io=addons-832672 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:17:13.935722  542668 oci.go:103] Successfully created a docker volume addons-832672
	I1123 10:17:13.935818  542668 cli_runner.go:164] Run: docker run --rm --name addons-832672-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-832672 --entrypoint /usr/bin/test -v addons-832672:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:17:15.672992  542668 cli_runner.go:217] Completed: docker run --rm --name addons-832672-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-832672 --entrypoint /usr/bin/test -v addons-832672:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.737135065s)
	I1123 10:17:15.673025  542668 oci.go:107] Successfully prepared a docker volume addons-832672
	I1123 10:17:15.673074  542668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:15.673090  542668 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:17:15.673161  542668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-832672:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:17:20.069440  542668 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-832672:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.396201507s)
	I1123 10:17:20.069472  542668 kic.go:203] duration metric: took 4.396379356s to extract preloaded images to volume ...
	W1123 10:17:20.069624  542668 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:17:20.069737  542668 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:17:20.125273  542668 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-832672 --name addons-832672 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-832672 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-832672 --network addons-832672 --ip 192.168.49.2 --volume addons-832672:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:17:20.418074  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Running}}
	I1123 10:17:20.444024  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:20.470440  542668 cli_runner.go:164] Run: docker exec addons-832672 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:17:20.538097  542668 oci.go:144] the created container "addons-832672" has a running status.
	I1123 10:17:20.538124  542668 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa...
	I1123 10:17:20.829235  542668 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:17:20.852314  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:20.875514  542668 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:17:20.875535  542668 kic_runner.go:114] Args: [docker exec --privileged addons-832672 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:17:20.949228  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:20.969584  542668 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:20.969681  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:20.991172  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:20.991482  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:20.991491  542668 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:20.993146  542668 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:17:24.153197  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-832672
	
	I1123 10:17:24.153222  542668 ubuntu.go:182] provisioning hostname "addons-832672"
	I1123 10:17:24.153301  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.171478  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.171804  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:24.171820  542668 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-832672 && echo "addons-832672" | sudo tee /etc/hostname
	I1123 10:17:24.331120  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-832672
	
	I1123 10:17:24.331203  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.350811  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.351140  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:24.351163  542668 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-832672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-832672/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-832672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:24.505929  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:24.505960  542668 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 10:17:24.505988  542668 ubuntu.go:190] setting up certificates
	I1123 10:17:24.505998  542668 provision.go:84] configureAuth start
	I1123 10:17:24.506065  542668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-832672
	I1123 10:17:24.523675  542668 provision.go:143] copyHostCerts
	I1123 10:17:24.523769  542668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 10:17:24.523894  542668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 10:17:24.523962  542668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 10:17:24.524015  542668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.addons-832672 san=[127.0.0.1 192.168.49.2 addons-832672 localhost minikube]
	I1123 10:17:24.659299  542668 provision.go:177] copyRemoteCerts
	I1123 10:17:24.659370  542668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:24.659415  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.681787  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:24.785152  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:17:24.803752  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:24.821501  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 10:17:24.839244  542668 provision.go:87] duration metric: took 333.216334ms to configureAuth
	I1123 10:17:24.839273  542668 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:24.839475  542668 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:24.839591  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.857708  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.858013  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:24.858031  542668 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:25.157246  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:25.157266  542668 machine.go:97] duration metric: took 4.187664851s to provisionDockerMachine
	I1123 10:17:25.157276  542668 client.go:176] duration metric: took 11.927419376s to LocalClient.Create
	I1123 10:17:25.157296  542668 start.go:167] duration metric: took 11.927487413s to libmachine.API.Create "addons-832672"
	I1123 10:17:25.157303  542668 start.go:293] postStartSetup for "addons-832672" (driver="docker")
	I1123 10:17:25.157313  542668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:25.157374  542668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:25.157440  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.175535  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.281712  542668 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:25.285361  542668 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:25.285388  542668 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:25.285400  542668 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 10:17:25.285490  542668 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 10:17:25.285518  542668 start.go:296] duration metric: took 128.209687ms for postStartSetup
	I1123 10:17:25.285837  542668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-832672
	I1123 10:17:25.302852  542668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/config.json ...
	I1123 10:17:25.303144  542668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:25.303206  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.320008  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.422340  542668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:25.427044  542668 start.go:128] duration metric: took 12.200898307s to createHost
	I1123 10:17:25.427072  542668 start.go:83] releasing machines lock for "addons-832672", held for 12.201031692s
	I1123 10:17:25.427144  542668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-832672
	I1123 10:17:25.444345  542668 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:25.444402  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.444410  542668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:25.444476  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.469044  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.477787  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.577234  542668 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:25.671815  542668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:25.708506  542668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:25.713453  542668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:25.713555  542668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:25.741871  542668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:17:25.741898  542668 start.go:496] detecting cgroup driver to use...
	I1123 10:17:25.741931  542668 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:17:25.741988  542668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:25.759835  542668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:25.771979  542668 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:25.772043  542668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:25.789006  542668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:25.806801  542668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:25.929731  542668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:26.054348  542668 docker.go:234] disabling docker service ...
	I1123 10:17:26.054476  542668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:26.078246  542668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:26.091971  542668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:26.213478  542668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:26.331479  542668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:26.343886  542668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:26.357910  542668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:26.357977  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.367058  542668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:17:26.367201  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.375945  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.384387  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.392873  542668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:26.400822  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.409373  542668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.423547  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.432702  542668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:26.441223  542668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:26.448822  542668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:26.560456  542668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:26.740523  542668 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:26.740615  542668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:26.744250  542668 start.go:564] Will wait 60s for crictl version
	I1123 10:17:26.744361  542668 ssh_runner.go:195] Run: which crictl
	I1123 10:17:26.747778  542668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:26.775219  542668 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:26.775410  542668 ssh_runner.go:195] Run: crio --version
	I1123 10:17:26.805965  542668 ssh_runner.go:195] Run: crio --version
	I1123 10:17:26.835781  542668 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:26.838614  542668 cli_runner.go:164] Run: docker network inspect addons-832672 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:26.858425  542668 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:26.862338  542668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:26.872345  542668 kubeadm.go:884] updating cluster {Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:26.872473  542668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:26.872537  542668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:26.908958  542668 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:26.908984  542668 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:26.909041  542668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:26.934054  542668 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:26.934078  542668 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:26.934088  542668 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:26.934181  542668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-832672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:26.934298  542668 ssh_runner.go:195] Run: crio config
	I1123 10:17:27.004935  542668 cni.go:84] Creating CNI manager for ""
	I1123 10:17:27.004956  542668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:27.004980  542668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:27.005016  542668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-832672 NodeName:addons-832672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:27.005180  542668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-832672"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:27.005274  542668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:27.015515  542668 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:27.015608  542668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:27.023706  542668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 10:17:27.036964  542668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:27.050072  542668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1123 10:17:27.063584  542668 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:27.067384  542668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:27.077284  542668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:27.189368  542668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:27.210093  542668 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672 for IP: 192.168.49.2
	I1123 10:17:27.210168  542668 certs.go:195] generating shared ca certs ...
	I1123 10:17:27.210203  542668 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.210389  542668 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 10:17:27.613414  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt ...
	I1123 10:17:27.613455  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt: {Name:mke30750f9c6ff0fde60b494542df07664fb1b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.613668  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key ...
	I1123 10:17:27.613680  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key: {Name:mkeefc63f05e517f4e56dec8685a29e5c333b1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.613766  542668 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 10:17:27.678213  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt ...
	I1123 10:17:27.678243  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt: {Name:mk910064634b90b3a357667f6d1c2c6ae9d2cbfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.678398  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key ...
	I1123 10:17:27.678418  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key: {Name:mk70ed3d2d9f99deb614a9f3da65b3eec4847bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.678499  542668 certs.go:257] generating profile certs ...
	I1123 10:17:27.678565  542668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.key
	I1123 10:17:27.678582  542668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt with IP's: []
	I1123 10:17:28.005755  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt ...
	I1123 10:17:28.005793  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: {Name:mk3212e233b345c80c7f5646a85d42fdb80def6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.006022  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.key ...
	I1123 10:17:28.006037  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.key: {Name:mk0d7a15230898871fde659685152c722e0134c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.006139  542668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb
	I1123 10:17:28.006162  542668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 10:17:28.495275  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb ...
	I1123 10:17:28.495310  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb: {Name:mk9c0218d8e1f341f93e84ebbed51df17ccf7c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.495500  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb ...
	I1123 10:17:28.495516  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb: {Name:mk804a0fcba3f7fe04e482e4cb9dad1ad68d5685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.495605  542668 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt
	I1123 10:17:28.495688  542668 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key
	I1123 10:17:28.495740  542668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key
	I1123 10:17:28.495761  542668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt with IP's: []
	I1123 10:17:28.739580  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt ...
	I1123 10:17:28.739612  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt: {Name:mk558cf10d22ce15b0080591ed282b80c13bbdd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.739790  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key ...
	I1123 10:17:28.739805  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key: {Name:mke5e2e9b48e9ee0c18861eaf2ee14facbbf43fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.740003  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:17:28.740049  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:28.740082  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:28.740146  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:28.740726  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:28.759805  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:28.778320  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:28.795713  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:17:28.813217  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 10:17:28.831318  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:17:28.850313  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:28.871669  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:28.893294  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:28.912683  542668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:28.925780  542668 ssh_runner.go:195] Run: openssl version
	I1123 10:17:28.931919  542668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:28.940566  542668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:28.944342  542668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:28.944414  542668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:28.987766  542668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:17:28.996130  542668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:28.999576  542668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:17:28.999626  542668 kubeadm.go:401] StartCluster: {Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:28.999705  542668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:28.999770  542668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:29.027671  542668 cri.go:89] found id: ""
	I1123 10:17:29.027747  542668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:29.035645  542668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:17:29.043264  542668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:17:29.043377  542668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:17:29.050983  542668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:17:29.051002  542668 kubeadm.go:158] found existing configuration files:
	
	I1123 10:17:29.051050  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:17:29.058665  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:17:29.058735  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:17:29.065830  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:17:29.073325  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:17:29.073394  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:17:29.080913  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:17:29.088450  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:17:29.088515  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:17:29.095639  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:17:29.103086  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:17:29.103151  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:17:29.110563  542668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
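	(Editor's note: the Start line above is minikube invoking kubeadm init with its generated config, skipping preflight checks that cannot hold inside a Docker container. A rough Go sketch of assembling such an invocation with os/exec, assuming the binary and config paths shown in the log exist on the node; this is an illustration, not minikube's implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Paths taken from the log above; adjust for a different node layout.
		kubeadm := "/var/lib/minikube/binaries/v1.34.1/kubeadm"
		config := "/var/tmp/minikube/kubeadm.yaml"

		// Preflight checks that are not meaningful inside a container are skipped.
		ignored := []string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"}

		cmd := exec.Command("sudo", kubeadm, "init",
			"--config", config,
			"--ignore-preflight-errors="+strings.Join(ignored, ","))
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}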
	I1123 10:17:29.151250  542668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:17:29.151477  542668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:17:29.173506  542668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:17:29.173585  542668 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:17:29.173623  542668 kubeadm.go:319] OS: Linux
	I1123 10:17:29.173674  542668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:17:29.173727  542668 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:17:29.173778  542668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:17:29.173830  542668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:17:29.173901  542668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:17:29.173956  542668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:17:29.174015  542668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:17:29.174065  542668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:17:29.174115  542668 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:17:29.246225  542668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:17:29.246429  542668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:17:29.246546  542668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:17:29.253991  542668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:17:29.261028  542668 out.go:252]   - Generating certificates and keys ...
	I1123 10:17:29.261195  542668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:17:29.261300  542668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:17:29.747544  542668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:17:30.247214  542668 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:17:30.735964  542668 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:17:31.857438  542668 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:17:32.166373  542668 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:17:32.166890  542668 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-832672 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 10:17:32.680842  542668 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:17:32.681307  542668 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-832672 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 10:17:33.340281  542668 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:17:34.183295  542668 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:17:34.360724  542668 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:17:34.361067  542668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:17:34.531519  542668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:17:35.266773  542668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:17:36.116615  542668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:17:37.310420  542668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:17:37.512143  542668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:17:37.512798  542668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:17:37.515612  542668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:17:37.519223  542668 out.go:252]   - Booting up control plane ...
	I1123 10:17:37.519337  542668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:17:37.519422  542668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:17:37.519497  542668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:17:37.534025  542668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:17:37.534345  542668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:17:37.543651  542668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:17:37.543966  542668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:17:37.544019  542668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:17:37.669504  542668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:17:37.669629  542668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:17:39.671973  542668 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001959236s
	I1123 10:17:39.675513  542668 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:17:39.675919  542668 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 10:17:39.676243  542668 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:17:39.676932  542668 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:17:42.348897  542668 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.671569154s
	I1123 10:17:44.193714  542668 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.516204487s
	I1123 10:17:45.678221  542668 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001553694s
	I1123 10:17:45.699613  542668 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:17:45.715340  542668 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:17:45.731186  542668 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:17:45.731457  542668 kubeadm.go:319] [mark-control-plane] Marking the node addons-832672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:17:45.743038  542668 kubeadm.go:319] [bootstrap-token] Using token: 8jeqce.9gmif7n048bp2h39
	I1123 10:17:45.746427  542668 out.go:252]   - Configuring RBAC rules ...
	I1123 10:17:45.746573  542668 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:17:45.752797  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:17:45.761578  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:17:45.765455  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:17:45.769267  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:17:45.773842  542668 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:17:46.085729  542668 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:17:46.517158  542668 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:17:47.085205  542668 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:17:47.086459  542668 kubeadm.go:319] 
	I1123 10:17:47.086532  542668 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:17:47.086537  542668 kubeadm.go:319] 
	I1123 10:17:47.086616  542668 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:17:47.086620  542668 kubeadm.go:319] 
	I1123 10:17:47.086646  542668 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:17:47.086704  542668 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:17:47.086754  542668 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:17:47.086758  542668 kubeadm.go:319] 
	I1123 10:17:47.086824  542668 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:17:47.086828  542668 kubeadm.go:319] 
	I1123 10:17:47.086876  542668 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:17:47.086879  542668 kubeadm.go:319] 
	I1123 10:17:47.086940  542668 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:17:47.087016  542668 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:17:47.087090  542668 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:17:47.087094  542668 kubeadm.go:319] 
	I1123 10:17:47.087178  542668 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:17:47.087255  542668 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:17:47.087259  542668 kubeadm.go:319] 
	I1123 10:17:47.087344  542668 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8jeqce.9gmif7n048bp2h39 \
	I1123 10:17:47.087447  542668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 10:17:47.087467  542668 kubeadm.go:319] 	--control-plane 
	I1123 10:17:47.087472  542668 kubeadm.go:319] 
	I1123 10:17:47.087556  542668 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:17:47.087560  542668 kubeadm.go:319] 
	I1123 10:17:47.087641  542668 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8jeqce.9gmif7n048bp2h39 \
	I1123 10:17:47.087744  542668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 10:17:47.090076  542668 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:17:47.090294  542668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:17:47.090396  542668 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
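	(Editor's note: the kubeadm join output above carries a --discovery-token-ca-cert-hash. That value is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch that recomputes it from a local copy of the CA certificate; the file name "ca.crt" is an assumption standing in for /var/lib/minikube/certs/ca.crt on the node.)

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("ca.crt") // hypothetical local copy of the cluster CA
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Hash the raw Subject Public Key Info, as kubeadm does for this flag.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}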
	I1123 10:17:47.090430  542668 cni.go:84] Creating CNI manager for ""
	I1123 10:17:47.090444  542668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:47.093715  542668 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:17:47.096490  542668 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:17:47.100338  542668 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:17:47.100362  542668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:17:47.112756  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:17:47.394220  542668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:17:47.394354  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:47.394434  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-832672 minikube.k8s.io/updated_at=2025_11_23T10_17_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=addons-832672 minikube.k8s.io/primary=true
	I1123 10:17:47.573613  542668 ops.go:34] apiserver oom_adj: -16
	I1123 10:17:47.573730  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:48.074627  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:48.574150  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:49.073800  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:49.574845  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:50.073907  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:50.574808  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:51.074497  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:51.161873  542668 kubeadm.go:1114] duration metric: took 3.76756513s to wait for elevateKubeSystemPrivileges
	I1123 10:17:51.161909  542668 kubeadm.go:403] duration metric: took 22.16228594s to StartCluster
	I1123 10:17:51.161927  542668 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:51.162050  542668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:17:51.162424  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:51.162639  542668 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:51.162781  542668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:17:51.163054  542668 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:51.163099  542668 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 10:17:51.163185  542668 addons.go:70] Setting yakd=true in profile "addons-832672"
	I1123 10:17:51.163205  542668 addons.go:239] Setting addon yakd=true in "addons-832672"
	I1123 10:17:51.163235  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.163747  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164449  542668 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-832672"
	I1123 10:17:51.164465  542668 addons.go:70] Setting cloud-spanner=true in profile "addons-832672"
	I1123 10:17:51.164478  542668 addons.go:70] Setting registry=true in profile "addons-832672"
	I1123 10:17:51.164484  542668 addons.go:239] Setting addon cloud-spanner=true in "addons-832672"
	I1123 10:17:51.164488  542668 addons.go:239] Setting addon registry=true in "addons-832672"
	I1123 10:17:51.164511  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.164518  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.164949  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164950  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164454  542668 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-832672"
	I1123 10:17:51.167479  542668 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-832672"
	I1123 10:17:51.167508  542668 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-832672"
	I1123 10:17:51.167530  542668 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-832672"
	I1123 10:17:51.167589  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.167850  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.169303  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164470  542668 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-832672"
	I1123 10:17:51.172216  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.172787  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.183694  542668 addons.go:70] Setting volcano=true in profile "addons-832672"
	I1123 10:17:51.183773  542668 addons.go:239] Setting addon volcano=true in "addons-832672"
	I1123 10:17:51.183825  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.170236  542668 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-832672"
	I1123 10:17:51.184321  542668 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-832672"
	I1123 10:17:51.184343  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.184861  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.185184  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.194446  542668 addons.go:70] Setting volumesnapshots=true in profile "addons-832672"
	I1123 10:17:51.194537  542668 addons.go:239] Setting addon volumesnapshots=true in "addons-832672"
	I1123 10:17:51.194588  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.195104  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.167469  542668 addons.go:70] Setting storage-provisioner=true in profile "addons-832672"
	I1123 10:17:51.196676  542668 addons.go:239] Setting addon storage-provisioner=true in "addons-832672"
	I1123 10:17:51.196741  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.170259  542668 addons.go:70] Setting default-storageclass=true in profile "addons-832672"
	I1123 10:17:51.196848  542668 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-832672"
	I1123 10:17:51.197258  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.170266  542668 addons.go:70] Setting gcp-auth=true in profile "addons-832672"
	I1123 10:17:51.252711  542668 mustload.go:66] Loading cluster: addons-832672
	I1123 10:17:51.252985  542668 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:51.253317  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.170273  542668 addons.go:70] Setting ingress=true in profile "addons-832672"
	I1123 10:17:51.270221  542668 addons.go:239] Setting addon ingress=true in "addons-832672"
	I1123 10:17:51.270349  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.271043  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.276217  542668 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 10:17:51.285501  542668 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 10:17:51.285571  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 10:17:51.285676  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.294398  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 10:17:51.170280  542668 addons.go:70] Setting ingress-dns=true in profile "addons-832672"
	I1123 10:17:51.295645  542668 addons.go:239] Setting addon ingress-dns=true in "addons-832672"
	I1123 10:17:51.295695  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.296189  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.300042  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 10:17:51.303040  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 10:17:51.305894  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 10:17:51.308843  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 10:17:51.308954  542668 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 10:17:51.311651  542668 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 10:17:51.311674  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 10:17:51.311741  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.170286  542668 addons.go:70] Setting inspektor-gadget=true in profile "addons-832672"
	I1123 10:17:51.317351  542668 addons.go:239] Setting addon inspektor-gadget=true in "addons-832672"
	I1123 10:17:51.317433  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.317898  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.170292  542668 addons.go:70] Setting metrics-server=true in profile "addons-832672"
	I1123 10:17:51.328614  542668 addons.go:239] Setting addon metrics-server=true in "addons-832672"
	I1123 10:17:51.328660  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.329116  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.349911  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.167454  542668 addons.go:70] Setting registry-creds=true in profile "addons-832672"
	I1123 10:17:51.359647  542668 addons.go:239] Setting addon registry-creds=true in "addons-832672"
	I1123 10:17:51.170331  542668 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:51.252375  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.362710  542668 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-832672"
	I1123 10:17:51.362867  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.363321  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.369541  542668 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 10:17:51.369771  542668 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 10:17:51.373056  542668 addons.go:239] Setting addon default-storageclass=true in "addons-832672"
	I1123 10:17:51.373093  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.373766  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.377368  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.377853  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.384633  542668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:51.384821  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 10:17:51.388415  542668 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 10:17:51.388609  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.390094  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 10:17:51.390110  542668 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 10:17:51.390163  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	W1123 10:17:51.404364  542668 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 10:17:51.409060  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 10:17:51.411192  542668 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 10:17:51.411215  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 10:17:51.411278  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.411903  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 10:17:51.424883  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 10:17:51.427804  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 10:17:51.427836  542668 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 10:17:51.427917  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.430898  542668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
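	(Editor's note: the pipeline above pulls the coredns ConfigMap, inserts a hosts stanza mapping host.minikube.internal to 192.168.49.1 before the forward plugin, and replaces the ConfigMap. A small Go sketch of the same Corefile edit done as a string transformation; the sample Corefile in main is a made-up example, and this is an illustration rather than minikube's code.)

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostsBlock inserts a CoreDNS "hosts" stanza immediately before the
	// "forward . /etc/resolv.conf" line of a Corefile, mirroring the sed pipeline
	// in the log above.
	func injectHostsBlock(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out = append(out, block)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

	func main() {
		sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}"
		fmt.Println(injectHostsBlock(sample, "192.168.49.1"))
	}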
	I1123 10:17:51.450321  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 10:17:51.450516  542668 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 10:17:51.513719  542668 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 10:17:51.498644  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.514667  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 10:17:51.524312  542668 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 10:17:51.532165  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 10:17:51.533606  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.544421  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 10:17:51.544442  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 10:17:51.544520  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.563843  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 10:17:51.566879  542668 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 10:17:51.566900  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 10:17:51.566963  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.575192  542668 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 10:17:51.583875  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.588365  542668 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:51.592438  542668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:51.592583  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.589219  542668 out.go:179]   - Using image docker.io/busybox:stable
	I1123 10:17:51.595153  542668 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 10:17:51.595173  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 10:17:51.595246  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.615564  542668 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 10:17:51.617903  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 10:17:51.617968  542668 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 10:17:51.618068  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.631481  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.633058  542668 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 10:17:51.633216  542668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 10:17:51.633635  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 10:17:51.633707  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.646869  542668 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 10:17:51.649895  542668 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 10:17:51.649920  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 10:17:51.649997  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.663109  542668 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 10:17:51.663131  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 10:17:51.663192  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.673254  542668 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:51.673465  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.678161  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.680341  542668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:51.680359  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:51.680420  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.725593  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.770974  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.771728  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.785082  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.815818  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.821693  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.832842  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.837687  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.840170  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.880861  542668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:52.083458  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 10:17:52.309345  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 10:17:52.382694  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 10:17:52.386815  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 10:17:52.502170  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 10:17:52.534870  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 10:17:52.653302  542668 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 10:17:52.653390  542668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 10:17:52.681477  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 10:17:52.681541  542668 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 10:17:52.686669  542668 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 10:17:52.686747  542668 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 10:17:52.690236  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 10:17:52.690307  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 10:17:52.696553  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 10:17:52.696621  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 10:17:52.726888  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:52.778191  542668 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 10:17:52.778267  542668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 10:17:52.781074  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 10:17:52.791251  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 10:17:52.791325  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 10:17:52.793607  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 10:17:52.793661  542668 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 10:17:52.796750  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 10:17:52.830701  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:52.861095  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 10:17:52.861173  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 10:17:52.864392  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 10:17:52.864461  542668 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 10:17:52.910023  542668 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 10:17:52.910094  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 10:17:52.930921  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 10:17:52.931002  542668 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 10:17:52.935385  542668 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 10:17:52.935457  542668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 10:17:52.936714  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 10:17:52.936776  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 10:17:53.010849  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 10:17:53.010926  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 10:17:53.054548  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 10:17:53.069350  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 10:17:53.119402  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:17:53.119425  542668 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 10:17:53.123891  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 10:17:53.123915  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 10:17:53.175747  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 10:17:53.175771  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 10:17:53.207491  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 10:17:53.207518  542668 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 10:17:53.326953  542668 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.895955815s)
	I1123 10:17:53.326983  542668 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1123 10:17:53.327063  542668 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.446173781s)
	I1123 10:17:53.327841  542668 node_ready.go:35] waiting up to 6m0s for node "addons-832672" to be "Ready" ...
	I1123 10:17:53.365509  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:17:53.395377  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 10:17:53.395403  542668 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 10:17:53.521934  542668 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 10:17:53.521959  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 10:17:53.684176  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.600635505s)
	I1123 10:17:53.722875  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.413493637s)
	I1123 10:17:53.744682  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 10:17:53.755097  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 10:17:53.755119  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 10:17:53.831816  542668 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-832672" context rescaled to 1 replicas
	I1123 10:17:53.943632  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 10:17:53.943656  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 10:17:54.143213  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 10:17:54.143241  542668 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 10:17:54.490004  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1123 10:17:55.335146  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:17:56.579843  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.197101942s)
	W1123 10:17:57.344303  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:17:57.543269  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.15641617s)
	I1123 10:17:57.543366  542668 addons.go:495] Verifying addon ingress=true in "addons-832672"
	I1123 10:17:57.543644  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.041384866s)
	I1123 10:17:57.543726  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.00878644s)
	I1123 10:17:57.543799  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.816839429s)
	I1123 10:17:57.544021  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.762889535s)
	I1123 10:17:57.544054  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.747248929s)
	I1123 10:17:57.544098  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.713332271s)
	I1123 10:17:57.544140  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.48950994s)
	I1123 10:17:57.544256  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.474883962s)
	I1123 10:17:57.544361  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.178825151s)
	I1123 10:17:57.544373  542668 addons.go:495] Verifying addon metrics-server=true in "addons-832672"
	I1123 10:17:57.544479  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.799767043s)
	W1123 10:17:57.544501  542668 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 10:17:57.544531  542668 retry.go:31] will retry after 269.020499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
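
The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind ("ensure CRDs are installed first"). minikube recovers by retrying the same apply with --force, as the 10:17:57.814509 entry below shows, and that retry completes at 10:18:00.624998. A minimal manual equivalent is sketched here for reference only; it assumes the same manifest paths quoted in this log and adds a kubectl wait step that minikube itself does not run:

	# Install the snapshot CRDs first and wait until the API server serves the new kinds,
	# then apply the VolumeSnapshotClass that depends on them.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
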
	I1123 10:17:57.544699  542668 addons.go:495] Verifying addon registry=true in "addons-832672"
	I1123 10:17:57.546881  542668 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-832672 service yakd-dashboard -n yakd-dashboard
	
	I1123 10:17:57.546998  542668 out.go:179] * Verifying ingress addon...
	I1123 10:17:57.550830  542668 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 10:17:57.551153  542668 out.go:179] * Verifying registry addon...
	I1123 10:17:57.555331  542668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 10:17:57.563592  542668 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 10:17:57.563613  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:57.576586  542668 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 10:17:57.576606  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:57.814509  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 10:17:57.829933  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.339881578s)
	I1123 10:17:57.830013  542668 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-832672"
	I1123 10:17:57.833139  542668 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 10:17:57.836746  542668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 10:17:57.858719  542668 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 10:17:57.858791  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:58.057315  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:58.059559  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:58.341079  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:58.555267  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:58.559527  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:58.840289  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:59.054985  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:59.058535  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:59.311719  542668 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 10:17:59.311818  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:59.330528  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:59.348363  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:59.446644  542668 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 10:17:59.459410  542668 addons.go:239] Setting addon gcp-auth=true in "addons-832672"
	I1123 10:17:59.459506  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:59.459986  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:59.476938  542668 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 10:17:59.476991  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:59.493664  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:59.553988  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:59.558554  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1123 10:17:59.831093  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:17:59.840089  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:00.055661  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:00.077880  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:00.354001  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:00.555268  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:00.558650  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:00.624998  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.810391235s)
	I1123 10:18:00.625042  542668 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.148078409s)
	I1123 10:18:00.628036  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 10:18:00.630817  542668 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 10:18:00.633628  542668 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 10:18:00.633654  542668 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 10:18:00.648046  542668 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 10:18:00.648071  542668 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 10:18:00.664320  542668 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 10:18:00.664344  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 10:18:00.680192  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 10:18:00.841567  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:01.055661  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:01.087756  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:01.161618  542668 addons.go:495] Verifying addon gcp-auth=true in "addons-832672"
	I1123 10:18:01.165907  542668 out.go:179] * Verifying gcp-auth addon...
	I1123 10:18:01.169614  542668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 10:18:01.175821  542668 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 10:18:01.175846  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:01.344132  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:01.554514  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:01.557945  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:01.673111  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:01.831196  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:01.839817  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:02.053828  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:02.058654  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:02.173321  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:02.340793  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:02.554192  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:02.559998  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:02.672837  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:02.840359  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:03.054711  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:03.058396  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:03.173368  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:03.340498  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:03.554789  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:03.559524  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:03.673192  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:03.839768  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:04.053761  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:04.058588  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:04.173366  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:04.331812  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:04.340190  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:04.555113  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:04.559128  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:04.673338  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:04.840733  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:05.054769  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:05.058570  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:05.173241  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:05.339881  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:05.554202  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:05.558645  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:05.673049  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:05.839367  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:06.054898  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:06.059262  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:06.173031  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:06.339691  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:06.555103  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:06.559689  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:06.672852  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:06.830789  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:06.840295  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:07.054743  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:07.058565  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:07.173615  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:07.345917  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:07.553938  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:07.558544  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:07.673362  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:07.840143  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:08.054346  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:08.059436  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:08.173238  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:08.340175  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:08.554976  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:08.558505  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:08.672363  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:08.831115  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:08.839875  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:09.053872  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:09.058619  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:09.172473  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:09.341120  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:09.553806  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:09.558194  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:09.673171  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:09.840402  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:10.054758  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:10.058590  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:10.173436  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:10.340464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:10.554775  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:10.557977  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:10.673694  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:10.831553  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:10.840323  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:11.054779  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:11.058975  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:11.172787  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:11.341495  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:11.553844  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:11.559233  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:11.672858  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:11.839454  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:12.054929  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:12.058431  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:12.173075  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:12.339753  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:12.554997  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:12.560265  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:12.673487  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:12.840278  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:13.054388  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:13.058215  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:13.173121  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:13.330806  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:13.339762  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:13.554684  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:13.557859  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:13.672644  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:13.840096  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:14.054309  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:14.058162  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:14.172767  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:14.339609  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:14.554137  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:14.559445  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:14.673683  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:14.839880  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:15.054860  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:15.059874  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:15.172766  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:15.340427  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:15.554422  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:15.557729  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:15.672834  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:15.831517  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:15.840564  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:16.054779  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:16.059143  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:16.172908  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:16.340397  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:16.554624  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:16.557507  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:16.672454  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:16.839778  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:17.054144  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:17.059225  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:17.173287  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:17.340464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:17.554585  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:17.558002  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:17.672880  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:17.840207  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:18.054409  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:18.059254  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:18.172845  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:18.330541  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:18.340295  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:18.554393  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:18.557911  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:18.672664  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:18.839498  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:19.054643  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:19.057950  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:19.172501  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:19.340391  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:19.554958  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:19.558486  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:19.673314  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:19.839456  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:20.054644  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:20.059436  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:20.173571  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:20.331200  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:20.340695  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:20.555167  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:20.559632  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:20.672247  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:20.840347  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:21.054530  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:21.058138  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:21.172780  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:21.340500  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:21.553966  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:21.558499  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:21.673266  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:21.839494  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:22.054775  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:22.058507  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:22.173466  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:22.332063  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:22.341861  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:22.554068  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:22.558428  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:22.673017  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:22.849596  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:23.053714  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:23.058467  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:23.173587  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:23.341249  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:23.554257  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:23.559784  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:23.672749  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:23.840044  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:24.053971  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:24.058950  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:24.172827  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:24.341037  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:24.554423  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:24.557920  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:24.672627  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:24.831532  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:24.840171  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:25.054813  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:25.058957  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:25.172464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:25.345703  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:25.554000  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:25.558583  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:25.672731  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:25.839480  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:26.055065  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:26.059204  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:26.172863  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:26.341009  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:26.554223  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:26.558900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:26.672713  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:26.839820  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:27.054311  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:27.057960  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:27.172950  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:27.330766  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:27.340813  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:27.553843  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:27.558196  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:27.672907  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:27.839824  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:28.054159  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:28.059230  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:28.173193  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:28.340442  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:28.554538  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:28.557938  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:28.672546  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:28.840283  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:29.054543  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:29.058511  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:29.172375  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:29.331192  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:29.345900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:29.554636  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:29.557822  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:29.672553  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:29.840167  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:30.055744  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:30.060127  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:30.172959  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:30.346842  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:30.554117  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:30.558566  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:30.672450  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:30.840200  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:31.054542  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:31.059293  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:31.173243  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:31.331466  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:31.340535  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:31.554544  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:31.558006  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:31.672927  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:31.839900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:32.054280  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:32.058117  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:32.172927  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:32.339919  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:32.554713  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:32.557825  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:32.672585  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:32.839871  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:33.054365  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:33.059368  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:33.173372  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:33.339888  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:33.554059  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:33.558495  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:33.672400  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:33.831174  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:33.839890  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:34.054349  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:34.058619  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:34.173479  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:34.339964  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:34.554184  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:34.559194  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:34.672937  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:34.840099  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:35.055167  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:35.060903  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:35.173097  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:35.361562  542668 node_ready.go:49] node "addons-832672" is "Ready"
	I1123 10:18:35.361613  542668 node_ready.go:38] duration metric: took 42.033740758s for node "addons-832672" to be "Ready" ...
	I1123 10:18:35.361629  542668 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:35.361687  542668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:35.378336  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:35.399772  542668 api_server.go:72] duration metric: took 44.237100442s to wait for apiserver process to appear ...
	I1123 10:18:35.399847  542668 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:35.399894  542668 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 10:18:35.420601  542668 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 10:18:35.435734  542668 api_server.go:141] control plane version: v1.34.1
	I1123 10:18:35.435761  542668 api_server.go:131] duration metric: took 35.880113ms to wait for apiserver health ...
	I1123 10:18:35.435770  542668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:18:35.471297  542668 system_pods.go:59] 19 kube-system pods found
	I1123 10:18:35.471381  542668 system_pods.go:61] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending
	I1123 10:18:35.471404  542668 system_pods.go:61] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending
	I1123 10:18:35.471424  542668 system_pods.go:61] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending
	I1123 10:18:35.471461  542668 system_pods.go:61] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending
	I1123 10:18:35.471484  542668 system_pods.go:61] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:35.471505  542668 system_pods.go:61] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:35.471540  542668 system_pods.go:61] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:35.471560  542668 system_pods.go:61] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:35.471580  542668 system_pods.go:61] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending
	I1123 10:18:35.471601  542668 system_pods.go:61] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:35.471631  542668 system_pods.go:61] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:35.471654  542668 system_pods.go:61] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending
	I1123 10:18:35.471674  542668 system_pods.go:61] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending
	I1123 10:18:35.471693  542668 system_pods.go:61] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending
	I1123 10:18:35.471726  542668 system_pods.go:61] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending
	I1123 10:18:35.471743  542668 system_pods.go:61] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending
	I1123 10:18:35.471765  542668 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending
	I1123 10:18:35.471799  542668 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending
	I1123 10:18:35.471822  542668 system_pods.go:61] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending
	I1123 10:18:35.471843  542668 system_pods.go:74] duration metric: took 36.06738ms to wait for pod list to return data ...
	I1123 10:18:35.471879  542668 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:18:35.487089  542668 default_sa.go:45] found service account: "default"
	I1123 10:18:35.487161  542668 default_sa.go:55] duration metric: took 15.2597ms for default service account to be created ...
	I1123 10:18:35.487200  542668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:18:35.498814  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:35.498896  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending
	I1123 10:18:35.498916  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending
	I1123 10:18:35.498938  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending
	I1123 10:18:35.498974  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending
	I1123 10:18:35.498999  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:35.499020  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:35.499058  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:35.499081  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:35.499099  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending
	I1123 10:18:35.499133  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:35.499155  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:35.499173  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending
	I1123 10:18:35.499191  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending
	I1123 10:18:35.499221  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending
	I1123 10:18:35.499243  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending
	I1123 10:18:35.499262  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending
	I1123 10:18:35.499281  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending
	I1123 10:18:35.499318  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:35.499342  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending
	I1123 10:18:35.499389  542668 retry.go:31] will retry after 288.405129ms: missing components: kube-dns
	I1123 10:18:35.569732  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:35.574048  542668 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 10:18:35.574118  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:35.773898  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:35.813154  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:35.813238  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:18:35.813261  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending
	I1123 10:18:35.813297  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending
	I1123 10:18:35.813322  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending
	I1123 10:18:35.813345  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:35.813384  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:35.813434  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:35.813453  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:35.813490  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:35.813513  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:35.813535  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:35.813573  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:35.813594  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending
	I1123 10:18:35.813612  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending
	I1123 10:18:35.813631  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending
	I1123 10:18:35.813665  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending
	I1123 10:18:35.813685  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:35.813708  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:35.813742  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:18:35.813777  542668 retry.go:31] will retry after 369.032447ms: missing components: kube-dns
	I1123 10:18:35.841687  542668 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 10:18:35.841760  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:36.062766  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:36.063700  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:36.173264  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:36.275538  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:36.275620  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:18:36.275665  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 10:18:36.275686  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 10:18:36.275725  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 10:18:36.275749  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:36.275770  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:36.275810  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:36.275834  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:36.275856  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:36.275894  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:36.275918  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:36.275944  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:36.275983  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 10:18:36.276013  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 10:18:36.276036  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 10:18:36.276070  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 10:18:36.276095  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.276120  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.276159  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:18:36.276195  542668 retry.go:31] will retry after 345.412667ms: missing components: kube-dns
	I1123 10:18:36.374521  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:36.604070  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:36.604605  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:36.677320  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:36.679221  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:36.679292  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:18:36.679314  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 10:18:36.679351  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 10:18:36.679374  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 10:18:36.679393  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:36.679415  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:36.679446  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:36.679469  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:36.679492  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:36.679526  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:36.679549  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:36.679569  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:36.679609  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 10:18:36.679636  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 10:18:36.679657  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 10:18:36.679693  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 10:18:36.679719  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.679743  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.679780  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Running
	I1123 10:18:36.679815  542668 retry.go:31] will retry after 575.218512ms: missing components: kube-dns
	I1123 10:18:36.841347  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:37.054279  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:37.059036  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:37.175489  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:37.262167  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:37.262207  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Running
	I1123 10:18:37.262219  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 10:18:37.262226  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 10:18:37.262236  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 10:18:37.262296  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:37.262303  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:37.262308  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:37.262316  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:37.262322  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:37.262326  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:37.262332  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:37.262350  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:37.262360  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 10:18:37.262366  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 10:18:37.262376  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 10:18:37.262384  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 10:18:37.262394  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:37.262400  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:37.262407  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Running
	I1123 10:18:37.262415  542668 system_pods.go:126] duration metric: took 1.775192418s to wait for k8s-apps to be running ...
	I1123 10:18:37.262432  542668 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:18:37.262488  542668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:37.279223  542668 system_svc.go:56] duration metric: took 16.781757ms WaitForService to wait for kubelet
	I1123 10:18:37.279295  542668 kubeadm.go:587] duration metric: took 46.116626644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:18:37.279339  542668 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:18:37.283155  542668 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:18:37.283213  542668 node_conditions.go:123] node cpu capacity is 2
	I1123 10:18:37.283228  542668 node_conditions.go:105] duration metric: took 3.855869ms to run NodePressure ...
	I1123 10:18:37.283249  542668 start.go:242] waiting for startup goroutines ...
	I1123 10:18:37.360729  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:37.555062  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:37.559498  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:37.673624  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:37.841179  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:38.056049  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:38.059805  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:38.173031  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:38.340735  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:38.558054  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:38.561988  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:38.672959  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:38.840599  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:39.054756  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:39.058470  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:39.173240  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:39.340740  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:39.554864  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:39.558494  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:39.673171  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:39.842196  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:40.059335  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:40.059452  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:40.176230  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:40.340520  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:40.555774  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:40.559848  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:40.672935  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:40.867389  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:41.054241  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:41.058965  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:41.172844  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:41.339807  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:41.554461  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:41.558336  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:41.674256  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:41.852095  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:42.058274  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:42.059855  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:42.175424  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:42.351616  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:42.562336  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:42.567830  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:42.672663  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:42.851455  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:43.062123  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:43.063966  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:43.173871  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:43.341525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:43.555122  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:43.559138  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:43.673286  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:43.845084  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:44.064247  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:44.065425  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:44.173297  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:44.340807  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:44.566393  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:44.566592  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:44.678804  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:44.844252  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:45.057038  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:45.061455  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:45.178292  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:45.350456  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:45.555087  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:45.560084  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:45.684883  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:45.841350  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:46.055356  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:46.060810  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:46.185711  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:46.340993  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:46.553684  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:46.558059  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:46.673057  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:46.840120  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:47.054504  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:47.058857  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:47.173087  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:47.340668  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:47.554393  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:47.558019  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:47.673174  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:47.840407  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:48.055124  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:48.059751  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:48.173098  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:48.341316  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:48.554600  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:48.559476  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:48.673453  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:48.841428  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:49.054498  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:49.059099  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:49.173087  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:49.340240  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:49.554473  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:49.557869  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:49.673165  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:49.840148  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:50.054261  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:50.059292  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:50.173236  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:50.340377  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:50.554315  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:50.557779  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:50.673231  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:50.840701  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:51.054997  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:51.059180  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:51.173470  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:51.353525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:51.555326  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:51.557694  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:51.672722  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:51.842956  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:52.054983  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:52.059729  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:52.173633  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:52.341953  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:52.555581  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:52.559996  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:52.678284  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:52.841131  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:53.054346  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:53.058134  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:53.173033  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:53.341672  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:53.554965  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:53.560585  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:53.674097  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:53.840385  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:54.055516  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:54.058635  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:54.173083  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:54.340765  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:54.553961  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:54.558598  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:54.673525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:54.841651  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:55.055579  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:55.059562  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:55.173665  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:55.351604  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:55.555790  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:55.558168  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:55.674496  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:55.841951  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:56.054441  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:56.058710  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:56.172998  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:56.340353  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:56.554602  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:56.559295  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:56.674514  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:56.840851  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:57.054554  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:57.058693  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:57.173359  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:57.341941  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:57.553727  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:57.559350  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:57.673353  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:57.841393  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:58.055218  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:58.059095  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:58.172759  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:58.341276  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:58.556924  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:58.561779  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:58.673211  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:58.840581  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:59.055191  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:59.059184  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:59.173486  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:59.340760  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:59.554139  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:59.559992  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:59.673464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:59.844731  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:00.057475  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:00.068243  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:00.181323  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:00.341737  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:00.555343  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:00.558731  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:00.673276  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:00.840695  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:01.055342  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:01.059523  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:01.174098  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:01.341388  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:01.555714  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:01.558708  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:01.673578  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:01.841569  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:02.055188  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:02.059435  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:02.173722  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:02.346558  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:02.554733  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:02.558910  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:02.673491  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:02.840576  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:03.054404  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:03.059878  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:03.173330  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:03.340903  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:03.554563  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:03.559338  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:03.673548  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:03.844174  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:04.054967  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:04.063477  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:04.173781  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:04.341109  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:04.554753  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:04.567486  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:04.672787  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:04.840366  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:05.054631  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:05.058462  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:05.173138  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:05.346902  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:05.554707  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:05.558368  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:05.673558  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:05.841282  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:06.054480  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:06.060646  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:06.172602  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:06.340560  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:06.554494  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:06.558018  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:06.678691  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:06.839777  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:07.053870  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:07.058912  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:07.173131  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:07.340278  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:07.557208  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:07.559547  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:07.672564  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:07.840696  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:08.054385  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:08.058981  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:08.173112  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:08.342131  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:08.555702  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:08.559329  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:08.675718  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:08.841500  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:09.055622  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:09.060049  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:09.173831  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:09.340497  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:09.553734  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:09.558289  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:09.674232  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:09.841018  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:10.054427  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:10.058861  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:10.174050  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:10.342614  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:10.554685  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:10.558268  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:10.673710  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:10.840750  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:11.061953  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:11.067283  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:11.173852  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:11.342152  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:11.554401  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:11.557683  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:11.672448  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:11.840505  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:12.054811  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:12.059871  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:12.173086  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:12.340913  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:12.554364  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:12.559272  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:12.673874  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:12.841226  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:13.055100  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:13.059280  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:13.173241  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:13.340475  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:13.554508  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:13.557913  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:13.672808  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:13.844036  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:14.055545  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:14.058903  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:14.173148  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:14.340838  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:14.554795  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:14.558968  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:14.673362  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:14.841225  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:15.055469  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:15.059358  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:15.174041  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:15.340535  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:15.554878  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:15.558762  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:15.681617  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:15.840987  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:16.055971  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:16.059599  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:16.174267  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:16.340979  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:16.555068  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:16.559147  542668 kapi.go:107] duration metric: took 1m19.0038185s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 10:19:16.672990  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:16.840832  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:17.055351  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:17.173898  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:17.340644  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:17.555909  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:17.672702  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:17.841052  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:18.054588  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:18.172463  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:18.341305  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:18.554948  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:18.673153  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:18.841082  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:19.054294  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:19.173493  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:19.355144  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:19.555467  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:19.675097  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:19.842500  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:20.055566  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:20.173963  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:20.343073  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:20.559095  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:20.673869  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:20.840838  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:21.054065  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:21.173589  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:21.341886  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:21.554184  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:21.672831  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:21.840334  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:22.055383  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:22.173564  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:22.350265  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:22.555107  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:22.673453  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:22.841879  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:23.054348  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:23.174826  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:23.342201  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:23.555207  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:23.673204  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:23.840874  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:24.053857  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:24.172898  542668 kapi.go:107] duration metric: took 1m23.00328581s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 10:19:24.176603  542668 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-832672 cluster.
	I1123 10:19:24.179906  542668 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 10:19:24.183279  542668 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 10:19:24.340683  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:24.554502  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:24.841241  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:25.055435  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:25.341197  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:25.554657  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:25.840824  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:26.055142  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:26.340761  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:26.555058  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:26.841752  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:27.054565  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:27.340161  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:27.554117  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:27.840339  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:28.054963  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:28.340525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:28.555391  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:28.846399  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:29.055120  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:29.341033  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:29.554987  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:29.840445  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:30.078506  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:30.354912  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:30.555238  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:30.848856  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:31.054437  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:31.340967  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:31.554561  542668 kapi.go:107] duration metric: took 1m34.003732689s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 10:19:31.840812  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:32.340603  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:32.841212  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:33.341048  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:33.840900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:34.340888  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:34.841887  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:35.341292  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:35.840198  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:36.342608  542668 kapi.go:107] duration metric: took 1m38.505861652s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 10:19:36.345734  542668 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, inspektor-gadget, registry-creds, storage-provisioner, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1123 10:19:36.348663  542668 addons.go:530] duration metric: took 1m45.18555693s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner-rancher ingress-dns inspektor-gadget registry-creds storage-provisioner cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1123 10:19:36.348724  542668 start.go:247] waiting for cluster config update ...
	I1123 10:19:36.348747  542668 start.go:256] writing updated cluster config ...
	I1123 10:19:36.349059  542668 ssh_runner.go:195] Run: rm -f paused
	I1123 10:19:36.353682  542668 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:19:36.357223  542668 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zgvcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.363464  542668 pod_ready.go:94] pod "coredns-66bc5c9577-zgvcr" is "Ready"
	I1123 10:19:36.363494  542668 pod_ready.go:86] duration metric: took 6.245229ms for pod "coredns-66bc5c9577-zgvcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.365843  542668 pod_ready.go:83] waiting for pod "etcd-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.370738  542668 pod_ready.go:94] pod "etcd-addons-832672" is "Ready"
	I1123 10:19:36.370769  542668 pod_ready.go:86] duration metric: took 4.896721ms for pod "etcd-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.373388  542668 pod_ready.go:83] waiting for pod "kube-apiserver-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.378410  542668 pod_ready.go:94] pod "kube-apiserver-addons-832672" is "Ready"
	I1123 10:19:36.378441  542668 pod_ready.go:86] duration metric: took 4.989342ms for pod "kube-apiserver-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.380911  542668 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.758545  542668 pod_ready.go:94] pod "kube-controller-manager-addons-832672" is "Ready"
	I1123 10:19:36.758577  542668 pod_ready.go:86] duration metric: took 377.638378ms for pod "kube-controller-manager-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.959008  542668 pod_ready.go:83] waiting for pod "kube-proxy-snjbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.357329  542668 pod_ready.go:94] pod "kube-proxy-snjbw" is "Ready"
	I1123 10:19:37.357357  542668 pod_ready.go:86] duration metric: took 398.321212ms for pod "kube-proxy-snjbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.557685  542668 pod_ready.go:83] waiting for pod "kube-scheduler-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.957611  542668 pod_ready.go:94] pod "kube-scheduler-addons-832672" is "Ready"
	I1123 10:19:37.957639  542668 pod_ready.go:86] duration metric: took 399.927816ms for pod "kube-scheduler-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.957653  542668 pod_ready.go:40] duration metric: took 1.603935065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:19:38.018628  542668 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:19:38.021998  542668 out.go:179] * Done! kubectl is now configured to use "addons-832672" cluster and "default" namespace by default
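	
	[Editor's note, not part of the captured log] The gcp-auth messages above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. The snippet below is a minimal illustrative sketch of such a pod spec built with the Kubernetes Go types; the pod name, namespace, and image are hypothetical and not taken from this test run.
	
	// sketch.go - illustrative only; assumes k8s.io/api and k8s.io/apimachinery are available
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-auth-example", // hypothetical pod name
				Namespace: "default",             // hypothetical namespace
				Labels: map[string]string{
					// Per the minikube output above, the presence of this label key
					// tells the gcp-auth webhook not to mount GCP credentials into the pod.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox"}, // hypothetical container
				},
			},
		}
		// Print the labels to show what the webhook would inspect.
		fmt.Printf("labels: %v\n", pod.ObjectMeta.Labels)
	}
	
	Applying an equivalent manifest with kubectl (or, as the log notes, rerunning the addon enable with --refresh for pods that already exist) is how the skip behavior would be exercised in practice.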
	
	
	==> CRI-O <==
	Nov 23 10:21:46 addons-832672 crio[829]: time="2025-11-23T10:21:46.667635047Z" level=info msg="Removed pod sandbox: 7f2d88a577e4e7fd334af9ea6e6e0c917130dcc465bd091ce785af2a950f3bc2" id=fc96fab7-5a36-42ae-9df5-f2e8f5b83fd8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.287494996Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-5dwgn/POD" id=aa3bb5de-6bc8-4895-824b-f555c0e45d53 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.287564503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.299819811Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5dwgn Namespace:default ID:1d44acb690c9af5214bc40a287b057009902a5aac3a346bf82689d536dea4cd4 UID:b2a12077-1387-4033-b5d8-33cc0d797041 NetNS:/var/run/netns/0512e429-89ea-4801-8181-0dc981dec025 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001f793a0}] Aliases:map[]}"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.299987542Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-5dwgn to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.316739318Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5dwgn Namespace:default ID:1d44acb690c9af5214bc40a287b057009902a5aac3a346bf82689d536dea4cd4 UID:b2a12077-1387-4033-b5d8-33cc0d797041 NetNS:/var/run/netns/0512e429-89ea-4801-8181-0dc981dec025 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001f793a0}] Aliases:map[]}"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.317699865Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-5dwgn for CNI network kindnet (type=ptp)"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.3239462Z" level=info msg="Ran pod sandbox 1d44acb690c9af5214bc40a287b057009902a5aac3a346bf82689d536dea4cd4 with infra container: default/hello-world-app-5d498dc89-5dwgn/POD" id=aa3bb5de-6bc8-4895-824b-f555c0e45d53 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.325477521Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5ff8152b-09ad-46fb-b2fb-5e39b5841849 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.325721479Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5ff8152b-09ad-46fb-b2fb-5e39b5841849 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.326661038Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=5ff8152b-09ad-46fb-b2fb-5e39b5841849 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.328619884Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e33eb124-783c-4dfb-a1f0-650ce356bae5 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.334016123Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.93874068Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=e33eb124-783c-4dfb-a1f0-650ce356bae5 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.939553197Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a25ac013-67f3-4f54-aa34-38786d947a0b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.942216161Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4b20bff1-fe8f-447c-9921-cb51e66b5339 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.957464111Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-5dwgn/hello-world-app" id=5091b782-7270-4389-97f4-d17eca1cb036 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.957841405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.973763212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.97413527Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7162bc060bec02314a015159ebded8a69b5c8b321f71d9aafad135b420c66077/merged/etc/passwd: no such file or directory"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.974309624Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7162bc060bec02314a015159ebded8a69b5c8b321f71d9aafad135b420c66077/merged/etc/group: no such file or directory"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.974669374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.994637066Z" level=info msg="Created container 7813ca21f0b392b6e825be30d77b12720fbe8fd5c8a31c46447c83c5b8829972: default/hello-world-app-5d498dc89-5dwgn/hello-world-app" id=5091b782-7270-4389-97f4-d17eca1cb036 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:22:35 addons-832672 crio[829]: time="2025-11-23T10:22:35.995736101Z" level=info msg="Starting container: 7813ca21f0b392b6e825be30d77b12720fbe8fd5c8a31c46447c83c5b8829972" id=a88b1f0a-6b68-492e-933c-086dc80fe336 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:22:36 addons-832672 crio[829]: time="2025-11-23T10:22:36.001625237Z" level=info msg="Started container" PID=7057 containerID=7813ca21f0b392b6e825be30d77b12720fbe8fd5c8a31c46447c83c5b8829972 description=default/hello-world-app-5d498dc89-5dwgn/hello-world-app id=a88b1f0a-6b68-492e-933c-086dc80fe336 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d44acb690c9af5214bc40a287b057009902a5aac3a346bf82689d536dea4cd4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	7813ca21f0b39       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   1d44acb690c9a       hello-world-app-5d498dc89-5dwgn            default
	8f2c43df1d85d       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   b31e92067098b       nginx                                      default
	801617573b1b0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   f751c3794d6c3       busybox                                    default
	0d6735cfc81cc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	876f80945af82       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	413e66dc710ea       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	b6bfc4971a4ce       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	17dbb9a4b0b58       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   05c11920b8f34       ingress-nginx-controller-6c8bf45fb-qfs8k   ingress-nginx
	eb231ca13f49f       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    3                   aec688aa4054b       ingress-nginx-admission-patch-sjmvd        ingress-nginx
	d65cd5cc34cdd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   e688684a86d82       gcp-auth-78565c9fb4-mhfx4                  gcp-auth
	cd4980ae684bc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	0cab600cd36b7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   2c1383f948d9c       gadget-bh47b                               gadget
	6b8563d255a65       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   edf1281727ec5       registry-6b586f9694-n64pf                  kube-system
	59a26ed66a88a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   8d333002cc260       registry-proxy-g5zv2                       kube-system
	fac52e5468f02       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   767b7b2d2d658       snapshot-controller-7d9fbc56b8-qfdfv       kube-system
	749892c269a97       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   cfcbb8a67f3e4       yakd-dashboard-5ff678cb9-ljzns             yakd-dashboard
	bee261c58130a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	d51d99bf7ad51       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   8aaad2a8ef496       ingress-nginx-admission-create-rgg69       ingress-nginx
	13f3666d715eb       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   05cb677324305       nvidia-device-plugin-daemonset-jwlsr       kube-system
	240455e48d203       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   c1a74448032c6       csi-hostpath-attacher-0                    kube-system
	2d505f439d6fa       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   98629eb2d3035       csi-hostpath-resizer-0                     kube-system
	b161e83d129d4       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   023756d13ce97       cloud-spanner-emulator-5bdddb765-5djk5     default
	c0e97eff7ee81       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   40bef269e183a       kube-ingress-dns-minikube                  kube-system
	6f755863005bd       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   34a775e28a934       local-path-provisioner-648f6765c9-cv5hq    local-path-storage
	9892343ca47ba       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   316ecb140276b       metrics-server-85b7d694d7-lv5tb            kube-system
	6a1f9c0d3e16f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   3f813c2af9eb1       snapshot-controller-7d9fbc56b8-qsqmt       kube-system
	3419ff6dcec28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   2dd4e3c9b1061       coredns-66bc5c9577-zgvcr                   kube-system
	c8a56a4ee027a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   1f4e043abcfba       storage-provisioner                        kube-system
	3ff8fcd0337f5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   42f4390f1b22c       kindnet-vqgnm                              kube-system
	1c6ce78b41089       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   80abaedcdb8db       kube-proxy-snjbw                           kube-system
	fe381bc317e85       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             4 minutes ago            Running             etcd                                     0                   a68eae19352f1       etcd-addons-832672                         kube-system
	ed2ede976a893       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             4 minutes ago            Running             kube-scheduler                           0                   d4a406d9d0cb9       kube-scheduler-addons-832672               kube-system
	e5d0f156a4b2a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             4 minutes ago            Running             kube-apiserver                           0                   32b5944ce2bf0       kube-apiserver-addons-832672               kube-system
	3cc6c3e6832ed       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             4 minutes ago            Running             kube-controller-manager                  0                   f0c06f33634fb       kube-controller-manager-addons-832672      kube-system
	
	
	==> coredns [3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6] <==
	[INFO] 10.244.0.17:37905 - 48175 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001990544s
	[INFO] 10.244.0.17:37905 - 63604 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116498s
	[INFO] 10.244.0.17:37905 - 11398 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000298507s
	[INFO] 10.244.0.17:52158 - 11919 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000269444s
	[INFO] 10.244.0.17:52158 - 11681 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000166419s
	[INFO] 10.244.0.17:57749 - 31268 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014461s
	[INFO] 10.244.0.17:57749 - 31006 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008604s
	[INFO] 10.244.0.17:53941 - 23977 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126361s
	[INFO] 10.244.0.17:53941 - 23763 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081207s
	[INFO] 10.244.0.17:59090 - 48086 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00182049s
	[INFO] 10.244.0.17:59090 - 48275 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001920873s
	[INFO] 10.244.0.17:55030 - 31097 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154701s
	[INFO] 10.244.0.17:55030 - 31261 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000309863s
	[INFO] 10.244.0.20:46507 - 60716 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162242s
	[INFO] 10.244.0.20:40698 - 45789 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164425s
	[INFO] 10.244.0.20:44824 - 49733 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141566s
	[INFO] 10.244.0.20:56809 - 50573 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106496s
	[INFO] 10.244.0.20:60416 - 12455 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136052s
	[INFO] 10.244.0.20:59129 - 50330 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013789s
	[INFO] 10.244.0.20:44799 - 10327 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003030839s
	[INFO] 10.244.0.20:47195 - 11896 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002253178s
	[INFO] 10.244.0.20:47121 - 29779 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00195239s
	[INFO] 10.244.0.20:48042 - 52449 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001928521s
	[INFO] 10.244.0.23:38399 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196729s
	[INFO] 10.244.0.23:45422 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154834s
	
	
	==> describe nodes <==
	Name:               addons-832672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-832672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=addons-832672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_17_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-832672
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-832672"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:17:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-832672
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:22:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:21:20 +0000   Sun, 23 Nov 2025 10:17:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:21:20 +0000   Sun, 23 Nov 2025 10:17:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:21:20 +0000   Sun, 23 Nov 2025 10:17:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:21:20 +0000   Sun, 23 Nov 2025 10:18:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-832672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                bc3244ed-cf09-446c-8c77-ecf98153f57e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-5bdddb765-5djk5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  default                     hello-world-app-5d498dc89-5dwgn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-bh47b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  gcp-auth                    gcp-auth-78565c9fb4-mhfx4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-qfs8k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m40s
	  kube-system                 coredns-66bc5c9577-zgvcr                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m45s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 csi-hostpathplugin-sftm7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-addons-832672                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m51s
	  kube-system                 kindnet-vqgnm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m45s
	  kube-system                 kube-apiserver-addons-832672                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-addons-832672       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-snjbw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-addons-832672                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 metrics-server-85b7d694d7-lv5tb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m41s
	  kube-system                 nvidia-device-plugin-daemonset-jwlsr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-6b586f9694-n64pf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-creds-764b6fb674-6hk8b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 registry-proxy-g5zv2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-qfdfv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-qsqmt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  local-path-storage          local-path-provisioner-648f6765c9-cv5hq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ljzns              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m42s                  kube-proxy       
	  Normal   Starting                 4m58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m58s (x7 over 4m58s)  kubelet          Node addons-832672 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m58s (x7 over 4m58s)  kubelet          Node addons-832672 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m58s (x6 over 4m58s)  kubelet          Node addons-832672 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m51s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m51s                  kubelet          Node addons-832672 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m51s                  kubelet          Node addons-832672 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m51s                  kubelet          Node addons-832672 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m47s                  node-controller  Node addons-832672 event: Registered Node addons-832672 in Controller
	  Normal   NodeReady                4m2s                   kubelet          Node addons-832672 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	[Nov23 10:16] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8] <==
	{"level":"warn","ts":"2025-11-23T10:17:42.466074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.497862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.502818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.523627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.546473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.565767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.576651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.599457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.614153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.627931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.641600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.664480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.679424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.699541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.709791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.749935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.780235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.790582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.896848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:58.099181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:58.109744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.887052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.901090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.946385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.961863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d65cd5cc34cdd15c104c64e7e59cd6d9bdaea860b8594852cc5c94975a37f7eb] <==
	2025/11/23 10:19:23 GCP Auth Webhook started!
	2025/11/23 10:19:38 Ready to marshal response ...
	2025/11/23 10:19:38 Ready to write response ...
	2025/11/23 10:19:38 Ready to marshal response ...
	2025/11/23 10:19:38 Ready to write response ...
	2025/11/23 10:19:38 Ready to marshal response ...
	2025/11/23 10:19:38 Ready to write response ...
	2025/11/23 10:19:59 Ready to marshal response ...
	2025/11/23 10:19:59 Ready to write response ...
	2025/11/23 10:20:11 Ready to marshal response ...
	2025/11/23 10:20:11 Ready to write response ...
	2025/11/23 10:20:16 Ready to marshal response ...
	2025/11/23 10:20:16 Ready to write response ...
	2025/11/23 10:20:46 Ready to marshal response ...
	2025/11/23 10:20:46 Ready to write response ...
	2025/11/23 10:21:07 Ready to marshal response ...
	2025/11/23 10:21:07 Ready to write response ...
	2025/11/23 10:21:07 Ready to marshal response ...
	2025/11/23 10:21:07 Ready to write response ...
	2025/11/23 10:21:15 Ready to marshal response ...
	2025/11/23 10:21:15 Ready to write response ...
	2025/11/23 10:22:34 Ready to marshal response ...
	2025/11/23 10:22:34 Ready to write response ...
	
	
	==> kernel <==
	 10:22:37 up  3:05,  0 user,  load average: 0.67, 2.25, 2.97
	Linux addons-832672 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a] <==
	I1123 10:20:34.672542       1 main.go:301] handling current node
	I1123 10:20:44.675510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:20:44.675541       1 main.go:301] handling current node
	I1123 10:20:54.666712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:20:54.666822       1 main.go:301] handling current node
	I1123 10:21:04.673503       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:21:04.673536       1 main.go:301] handling current node
	I1123 10:21:14.667442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:21:14.667474       1 main.go:301] handling current node
	I1123 10:21:24.666367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:21:24.666404       1 main.go:301] handling current node
	I1123 10:21:34.673882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:21:34.673915       1 main.go:301] handling current node
	I1123 10:21:44.670768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:21:44.670803       1 main.go:301] handling current node
	I1123 10:21:54.675072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:21:54.675182       1 main.go:301] handling current node
	I1123 10:22:04.666379       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:22:04.666415       1 main.go:301] handling current node
	I1123 10:22:14.673513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:22:14.673547       1 main.go:301] handling current node
	I1123 10:22:24.675364       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:22:24.675398       1 main.go:301] handling current node
	I1123 10:22:34.671094       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:22:34.671129       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3] <==
	W1123 10:18:20.900986       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:20.946237       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:20.960570       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:35.365923       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.108.176:443: connect: connection refused
	E1123 10:18:35.366046       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.108.176:443: connect: connection refused" logger="UnhandledError"
	W1123 10:18:35.366901       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.108.176:443: connect: connection refused
	E1123 10:18:35.366983       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.108.176:443: connect: connection refused" logger="UnhandledError"
	W1123 10:18:35.449251       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.108.176:443: connect: connection refused
	E1123 10:18:35.449308       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.108.176:443: connect: connection refused" logger="UnhandledError"
	E1123 10:18:53.903518       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.17.130:443: connect: connection refused" logger="UnhandledError"
	W1123 10:18:53.903930       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 10:18:53.904222       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 10:18:53.904665       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.17.130:443: connect: connection refused" logger="UnhandledError"
	E1123 10:18:53.910249       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.17.130:443: connect: connection refused" logger="UnhandledError"
	I1123 10:18:54.047965       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 10:19:47.986944       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37188: use of closed network connection
	E1123 10:19:48.212658       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37230: use of closed network connection
	E1123 10:19:48.342011       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37240: use of closed network connection
	I1123 10:20:15.931547       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 10:20:16.250477       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.216.49"}
	I1123 10:20:23.338149       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 10:22:35.144320       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.117.154"}
	
	
	==> kube-controller-manager [3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20] <==
	I1123 10:17:50.916889       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:17:50.917966       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:17:50.917994       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:17:50.918029       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:17:50.918075       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:50.918119       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 10:17:50.918123       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:17:50.918143       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:17:50.918086       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:17:50.919484       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:17:50.919764       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:17:50.919813       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:17:50.919825       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:17:50.926777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:50.927790       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	E1123 10:18:20.880443       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 10:18:20.880595       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 10:18:20.880650       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 10:18:20.934741       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 10:18:20.939121       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 10:18:20.981649       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:18:21.040182       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:18:35.870936       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1123 10:18:50.993630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 10:18:51.048064       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b] <==
	I1123 10:17:54.402843       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:54.528348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:54.628971       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:54.629012       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 10:17:54.629086       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:54.780731       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:54.780784       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:54.788013       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:54.788308       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:54.788323       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:54.789636       1 config.go:200] "Starting service config controller"
	I1123 10:17:54.789645       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:54.789660       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:54.789664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:54.789675       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:54.789680       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:54.790291       1 config.go:309] "Starting node config controller"
	I1123 10:17:54.790297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:54.790303       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:54.892983       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:54.893024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:17:54.893060       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4] <==
	I1123 10:17:44.175845       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:17:44.175920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:17:44.181546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 10:17:44.184113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:17:44.184259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:17:44.184375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:17:44.184476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:17:44.184631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:17:44.187436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:17:44.187636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:17:44.187731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:17:44.187784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:17:44.187893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:17:44.187942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:17:44.187987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:17:44.188032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:17:44.188065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:17:44.188115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:17:44.188147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:17:44.188328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:17:44.188383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:17:45.029620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:17:45.047078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:17:45.373210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 10:17:47.175895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.740783    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-gcp-creds\") pod \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\" (UID: \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\") "
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.740859    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5sz6\" (UniqueName: \"kubernetes.io/projected/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-kube-api-access-m5sz6\") pod \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\" (UID: \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\") "
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.740887    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-data\") pod \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\" (UID: \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\") "
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.740908    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-script\") pod \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\" (UID: \"7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456\") "
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.741848    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-script" (OuterVolumeSpecName: "script") pod "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456" (UID: "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.741901    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-data" (OuterVolumeSpecName: "data") pod "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456" (UID: "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.741922    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456" (UID: "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.743576    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-kube-api-access-m5sz6" (OuterVolumeSpecName: "kube-api-access-m5sz6") pod "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456" (UID: "7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456"). InnerVolumeSpecName "kube-api-access-m5sz6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.842285    1277 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-data\") on node \"addons-832672\" DevicePath \"\""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.842335    1277 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-script\") on node \"addons-832672\" DevicePath \"\""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.842346    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-gcp-creds\") on node \"addons-832672\" DevicePath \"\""
	Nov 23 10:21:17 addons-832672 kubelet[1277]: I1123 10:21:17.842369    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5sz6\" (UniqueName: \"kubernetes.io/projected/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456-kube-api-access-m5sz6\") on node \"addons-832672\" DevicePath \"\""
	Nov 23 10:21:18 addons-832672 kubelet[1277]: I1123 10:21:18.660386    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc9a331c13f59842be8211b92e0cfeed60520fb6eb1863cafe4059093d28672"
	Nov 23 10:21:18 addons-832672 kubelet[1277]: E1123 10:21:18.662206    1277 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-158fdf5f-6f36-438b-8fb9-88aab27655a3\" is forbidden: User \"system:node:addons-832672\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-832672' and this object" podUID="7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456" pod="local-path-storage/helper-pod-delete-pvc-158fdf5f-6f36-438b-8fb9-88aab27655a3"
	Nov 23 10:21:20 addons-832672 kubelet[1277]: I1123 10:21:20.511895    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456" path="/var/lib/kubelet/pods/7b1e8fb7-6a08-4b1e-8fe8-6e1eac2e5456/volumes"
	Nov 23 10:21:33 addons-832672 kubelet[1277]: I1123 10:21:33.508645    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-n64pf" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 10:21:42 addons-832672 kubelet[1277]: I1123 10:21:42.508707    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jwlsr" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 10:21:42 addons-832672 kubelet[1277]: I1123 10:21:42.508947    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-g5zv2" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 10:21:46 addons-832672 kubelet[1277]: I1123 10:21:46.588276    1277 scope.go:117] "RemoveContainer" containerID="22434c3748a3f8b6f2b7f013d7f28cb1051f236c5cf06e7b2bb672fca09643fe"
	Nov 23 10:21:46 addons-832672 kubelet[1277]: I1123 10:21:46.610350    1277 scope.go:117] "RemoveContainer" containerID="698f0710440b2cc16b9af874544e4b25b84d15e51a97f4570f6bd4fba52a57cb"
	Nov 23 10:21:46 addons-832672 kubelet[1277]: I1123 10:21:46.622759    1277 scope.go:117] "RemoveContainer" containerID="47dc475f7862bada97beaf98c715edb17399a1c28b03d32ccc9b4177600357d3"
	Nov 23 10:21:46 addons-832672 kubelet[1277]: E1123 10:21:46.630062    1277 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/local-path-storage_helper-pod-create-pvc-158fdf5f-6f36-438b-8fb9-88aab27655a3_1f365723-5b7e-41b7-a751-3b721980e8f9/helper-pod/0.log" to get inode usage: stat /var/log/pods/local-path-storage_helper-pod-create-pvc-158fdf5f-6f36-438b-8fb9-88aab27655a3_1f365723-5b7e-41b7-a751-3b721980e8f9/helper-pod/0.log: no such file or directory
	Nov 23 10:21:46 addons-832672 kubelet[1277]: E1123 10:21:46.630814    1277 manager.go:1116] Failed to create existing container: /crio/crio-698f0710440b2cc16b9af874544e4b25b84d15e51a97f4570f6bd4fba52a57cb: Error finding container 698f0710440b2cc16b9af874544e4b25b84d15e51a97f4570f6bd4fba52a57cb: Status 404 returned error can't find the container with id 698f0710440b2cc16b9af874544e4b25b84d15e51a97f4570f6bd4fba52a57cb
	Nov 23 10:22:35 addons-832672 kubelet[1277]: I1123 10:22:35.052806    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b2a12077-1387-4033-b5d8-33cc0d797041-gcp-creds\") pod \"hello-world-app-5d498dc89-5dwgn\" (UID: \"b2a12077-1387-4033-b5d8-33cc0d797041\") " pod="default/hello-world-app-5d498dc89-5dwgn"
	Nov 23 10:22:35 addons-832672 kubelet[1277]: I1123 10:22:35.052883    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rd7p\" (UniqueName: \"kubernetes.io/projected/b2a12077-1387-4033-b5d8-33cc0d797041-kube-api-access-7rd7p\") pod \"hello-world-app-5d498dc89-5dwgn\" (UID: \"b2a12077-1387-4033-b5d8-33cc0d797041\") " pod="default/hello-world-app-5d498dc89-5dwgn"
	
	
	==> storage-provisioner [c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465] <==
	W1123 10:22:11.681000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:13.683589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:13.688439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:15.691355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:15.695883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:17.698997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:17.705393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:19.708346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:19.713061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:21.716472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:21.721086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:23.724235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:23.731246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:25.734749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:25.739162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:27.742842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:27.749481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:29.753091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:29.760204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:31.763253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:31.767502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:33.771175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:33.775248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:35.779747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:22:35.788200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-832672 -n addons-832672
helpers_test.go:269: (dbg) Run:  kubectl --context addons-832672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-832672 describe pod ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-832672 describe pod ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b: exit status 1 (83.533866ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rgg69" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sjmvd" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6hk8b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-832672 describe pod ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b: exit status 1
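Note: the ingress-dns disable below fails the same way as the earlier volcano disable. Before disabling an addon, minikube first checks whether the cluster is paused by listing kube-system containers through crictl and then asking runc for its container list; on this crio node the runc call is what exits non-zero. A minimal reproduction sketch of that check, assuming the two commands are run inside the node via `minikube ssh` (the harness itself runs the identical commands over its internal SSH runner, per the cri.go and ssh_runner.go lines in the stderr below):

	# list kube-system containers known to the CRI runtime (succeeds in the captured log)
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ask runc for its container list; on this node it fails with
	# "open /run/runc: no such file or directory", as captured in the stderr below
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo runc list -f json

This is only a sketch of the sequence the log shows, not the harness's actual code path.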
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (319.389933ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:22:38.342480  552180 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:22:38.344017  552180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:22:38.344033  552180 out.go:374] Setting ErrFile to fd 2...
	I1123 10:22:38.344039  552180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:22:38.344362  552180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:22:38.344823  552180 mustload.go:66] Loading cluster: addons-832672
	I1123 10:22:38.345273  552180 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:22:38.345287  552180 addons.go:622] checking whether the cluster is paused
	I1123 10:22:38.345448  552180 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:22:38.345462  552180 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:22:38.346236  552180 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:22:38.376786  552180 ssh_runner.go:195] Run: systemctl --version
	I1123 10:22:38.376854  552180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:22:38.411029  552180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:22:38.520089  552180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:22:38.520189  552180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:22:38.549552  552180 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:22:38.549582  552180 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:22:38.549588  552180 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:22:38.549621  552180 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:22:38.549631  552180 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:22:38.549635  552180 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:22:38.549638  552180 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:22:38.549642  552180 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:22:38.549645  552180 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:22:38.549652  552180 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:22:38.549658  552180 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:22:38.549662  552180 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:22:38.549665  552180 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:22:38.549668  552180 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:22:38.549671  552180 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:22:38.549676  552180 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:22:38.549703  552180 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:22:38.549709  552180 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:22:38.549716  552180 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:22:38.549719  552180 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:22:38.549725  552180 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:22:38.549729  552180 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:22:38.549734  552180 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:22:38.549737  552180 cri.go:89] found id: ""
	I1123 10:22:38.549807  552180 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:22:38.564989  552180 out.go:203] 
	W1123 10:22:38.568654  552180 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:22:38.568681  552180 out.go:285] * 
	* 
	W1123 10:22:38.575815  552180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:22:38.579084  552180 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable ingress --alsologtostderr -v=1: exit status 11 (262.664507ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:22:38.641335  552293 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:22:38.641997  552293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:22:38.642014  552293 out.go:374] Setting ErrFile to fd 2...
	I1123 10:22:38.642020  552293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:22:38.642326  552293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:22:38.642655  552293 mustload.go:66] Loading cluster: addons-832672
	I1123 10:22:38.643082  552293 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:22:38.643103  552293 addons.go:622] checking whether the cluster is paused
	I1123 10:22:38.643254  552293 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:22:38.643273  552293 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:22:38.643833  552293 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:22:38.660574  552293 ssh_runner.go:195] Run: systemctl --version
	I1123 10:22:38.660642  552293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:22:38.677147  552293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:22:38.780150  552293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:22:38.780275  552293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:22:38.812481  552293 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:22:38.812506  552293 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:22:38.812511  552293 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:22:38.812515  552293 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:22:38.812518  552293 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:22:38.812522  552293 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:22:38.812526  552293 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:22:38.812529  552293 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:22:38.812563  552293 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:22:38.812570  552293 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:22:38.812573  552293 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:22:38.812577  552293 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:22:38.812580  552293 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:22:38.812583  552293 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:22:38.812587  552293 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:22:38.812597  552293 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:22:38.812601  552293 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:22:38.812608  552293 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:22:38.812611  552293 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:22:38.812631  552293 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:22:38.812645  552293 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:22:38.812649  552293 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:22:38.812662  552293 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:22:38.812672  552293 cri.go:89] found id: ""
	I1123 10:22:38.812754  552293 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:22:38.827874  552293 out.go:203] 
	W1123 10:22:38.830804  552293 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:22:38.830829  552293 out.go:285] * 
	* 
	W1123 10:22:38.837907  552293 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:22:38.840829  552293 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable ingress --alsologtostderr -v=1": exit status 11
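Note: every `addons disable` call in this test fails the same way. The paused check shells out to `sudo runc list -f json` on the node, and runc exits 1 because its state directory /run/runc does not exist. A likely (unconfirmed) explanation is that this CRI-O image uses crun as its default OCI runtime, so container state lives under /run/crun and `runc list` has nothing to read. A minimal diagnostic sketch, assuming shell access via `minikube ssh`; the crun default and the /etc/crio config key are assumptions, not something this log confirms:

	# which OCI runtime state directories exist on the node
	out/minikube-linux-arm64 -p addons-832672 ssh -- ls -d /run/runc /run/crun
	# what the CRI reports about its runtime configuration
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo crictl info | grep -i runtime
	# the default_runtime setting in the CRI-O config, if present
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo grep -r default_runtime /etc/crio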
--- FAIL: TestAddons/parallel/Ingress (143.25s)

TestAddons/parallel/InspektorGadget (5.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bh47b" [ee1bd3e2-fe04-4a80-bca1-481146997d23] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003901443s
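The readiness gate above is the harness polling for pods labelled k8s-app=gadget in the gadget namespace until they report Running. An equivalent one-off check with plain kubectl (a sketch, not part of the test itself):

	kubectl --context addons-832672 -n gadget wait --for=condition=Ready pod -l k8s-app=gadget --timeout=8m
	kubectl --context addons-832672 -n gadget get pods -l k8s-app=gadget -o wide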
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (316.049825ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:20:15.348623  549753 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:15.349367  549753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:15.349384  549753 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:15.349391  549753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:15.349674  549753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:20:15.349962  549753 mustload.go:66] Loading cluster: addons-832672
	I1123 10:20:15.350345  549753 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:15.350362  549753 addons.go:622] checking whether the cluster is paused
	I1123 10:20:15.350473  549753 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:15.350485  549753 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:20:15.351186  549753 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:20:15.373535  549753 ssh_runner.go:195] Run: systemctl --version
	I1123 10:20:15.373593  549753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:20:15.391582  549753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:20:15.504704  549753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:20:15.504806  549753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:20:15.559881  549753 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:20:15.559902  549753 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:20:15.559908  549753 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:20:15.559916  549753 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:20:15.559920  549753 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:20:15.559924  549753 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:20:15.559928  549753 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:20:15.559931  549753 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:20:15.559935  549753 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:20:15.559942  549753 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:20:15.559946  549753 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:20:15.559949  549753 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:20:15.559952  549753 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:20:15.559955  549753 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:20:15.559959  549753 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:20:15.559964  549753 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:20:15.559967  549753 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:20:15.559971  549753 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:20:15.559975  549753 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:20:15.559978  549753 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:20:15.559986  549753 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:20:15.559994  549753 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:20:15.559997  549753 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:20:15.560000  549753 cri.go:89] found id: ""
	I1123 10:20:15.560058  549753 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:20:15.577322  549753 out.go:203] 
	W1123 10:20:15.581577  549753 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:20:15.581601  549753 out.go:285] * 
	* 
	W1123 10:20:15.589023  549753 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:20:15.592185  549753 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.32s)

TestAddons/parallel/MetricsServer (5.4s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.140289ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004678275s
addons_test.go:463: (dbg) Run:  kubectl --context addons-832672 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (307.89669ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:20:10.054547  549610 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:10.055447  549610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:10.055487  549610 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:10.055510  549610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:10.055832  549610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:20:10.056179  549610 mustload.go:66] Loading cluster: addons-832672
	I1123 10:20:10.056624  549610 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:10.056663  549610 addons.go:622] checking whether the cluster is paused
	I1123 10:20:10.056816  549610 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:10.056851  549610 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:20:10.057514  549610 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:20:10.083132  549610 ssh_runner.go:195] Run: systemctl --version
	I1123 10:20:10.083198  549610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:20:10.102297  549610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:20:10.208191  549610 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:20:10.208288  549610 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:20:10.243982  549610 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:20:10.244007  549610 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:20:10.244012  549610 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:20:10.244017  549610 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:20:10.244021  549610 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:20:10.244025  549610 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:20:10.244031  549610 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:20:10.244035  549610 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:20:10.244038  549610 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:20:10.244051  549610 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:20:10.244058  549610 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:20:10.244062  549610 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:20:10.244065  549610 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:20:10.244068  549610 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:20:10.244071  549610 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:20:10.244080  549610 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:20:10.244093  549610 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:20:10.244098  549610 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:20:10.244101  549610 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:20:10.244104  549610 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:20:10.244109  549610 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:20:10.244112  549610 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:20:10.244115  549610 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:20:10.244119  549610 cri.go:89] found id: ""
	I1123 10:20:10.244171  549610 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:20:10.259779  549610 out.go:203] 
	W1123 10:20:10.263106  549610 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:20:10.263129  549610 out.go:285] * 
	* 
	W1123 10:20:10.270356  549610 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:20:10.273747  549610 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable metrics-server --alsologtostderr -v=1": exit status 11
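The metrics-server pod itself was healthy and `kubectl top pods` ran above; only the disable step fails (same MK_ADDON_DISABLE_PAUSED error as the other addons). To confirm the metrics API is serving independently of the addon machinery, a sketch using the standard metrics.k8s.io endpoint (the path is the usual aggregated-API route, not something this log prints):

	kubectl --context addons-832672 top pods -n kube-system
	kubectl --context addons-832672 get --raw /apis/metrics.k8s.io/v1beta1/nodes | head -c 300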
--- FAIL: TestAddons/parallel/MetricsServer (5.40s)

TestAddons/parallel/CSI (62.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1123 10:19:51.810689  541900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 10:19:51.816329  541900 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 10:19:51.816356  541900 kapi.go:107] duration metric: took 5.69199ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.703625ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-832672 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-832672 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [81f8dbc0-0a06-4eed-a9a5-ccded85aa6ee] Pending
helpers_test.go:352: "task-pv-pod" [81f8dbc0-0a06-4eed-a9a5-ccded85aa6ee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [81f8dbc0-0a06-4eed-a9a5-ccded85aa6ee] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003747013s
addons_test.go:572: (dbg) Run:  kubectl --context addons-832672 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-832672 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-832672 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-832672 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-832672 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-832672 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-832672 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [bf3663ba-6b34-45d4-8c86-df3a56ca522f] Pending
helpers_test.go:352: "task-pv-pod-restore" [bf3663ba-6b34-45d4-8c86-df3a56ca522f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [bf3663ba-6b34-45d4-8c86-df3a56ca522f] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003566565s
addons_test.go:614: (dbg) Run:  kubectl --context addons-832672 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-832672 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-832672 delete volumesnapshot new-snapshot-demo
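The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` calls above are the harness polling each claim until its phase reaches Bound. A standalone version of the same loop, as a sketch (the 6-minute ceiling mirrors the test's timeout; the 5s interval is an assumption):

	# poll the PVC phase until it reports Bound, give up after ~6 minutes
	for i in $(seq 1 72); do
	  phase=$(kubectl --context addons-832672 get pvc hpvc -o jsonpath='{.status.phase}' -n default)
	  [ "$phase" = "Bound" ] && break
	  sleep 5
	done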
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (271.181087ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:20:54.017001  550647 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:54.017950  550647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:54.018007  550647 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:54.018029  550647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:54.018344  550647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:20:54.018795  550647 mustload.go:66] Loading cluster: addons-832672
	I1123 10:20:54.019351  550647 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:54.019429  550647 addons.go:622] checking whether the cluster is paused
	I1123 10:20:54.019566  550647 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:54.019604  550647 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:20:54.020224  550647 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:20:54.041187  550647 ssh_runner.go:195] Run: systemctl --version
	I1123 10:20:54.041236  550647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:20:54.062405  550647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:20:54.173122  550647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:20:54.173310  550647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:20:54.203560  550647 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:20:54.203592  550647 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:20:54.203599  550647 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:20:54.203603  550647 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:20:54.203607  550647 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:20:54.203611  550647 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:20:54.203615  550647 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:20:54.203619  550647 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:20:54.203622  550647 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:20:54.203629  550647 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:20:54.203636  550647 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:20:54.203640  550647 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:20:54.203643  550647 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:20:54.203646  550647 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:20:54.203650  550647 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:20:54.203657  550647 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:20:54.203661  550647 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:20:54.203665  550647 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:20:54.203668  550647 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:20:54.203671  550647 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:20:54.203676  550647 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:20:54.203679  550647 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:20:54.203682  550647 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:20:54.203685  550647 cri.go:89] found id: ""
	I1123 10:20:54.203743  550647 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:20:54.218208  550647 out.go:203] 
	W1123 10:20:54.221147  550647 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:20:54.221168  550647 out.go:285] * 
	* 
	W1123 10:20:54.228423  550647 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:20:54.231356  550647 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (276.103169ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:20:54.285668  550691 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:54.286467  550691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:54.286481  550691 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:54.286488  550691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:54.286784  550691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:20:54.287109  550691 mustload.go:66] Loading cluster: addons-832672
	I1123 10:20:54.287520  550691 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:54.287547  550691 addons.go:622] checking whether the cluster is paused
	I1123 10:20:54.287692  550691 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:20:54.287710  550691 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:20:54.288266  550691 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:20:54.309880  550691 ssh_runner.go:195] Run: systemctl --version
	I1123 10:20:54.310036  550691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:20:54.331607  550691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:20:54.444325  550691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:20:54.444425  550691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:20:54.478383  550691 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:20:54.478406  550691 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:20:54.478412  550691 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:20:54.478416  550691 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:20:54.478419  550691 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:20:54.478422  550691 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:20:54.478425  550691 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:20:54.478428  550691 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:20:54.478432  550691 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:20:54.478438  550691 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:20:54.478441  550691 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:20:54.478444  550691 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:20:54.478447  550691 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:20:54.478450  550691 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:20:54.478454  550691 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:20:54.478459  550691 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:20:54.478467  550691 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:20:54.478470  550691 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:20:54.478474  550691 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:20:54.478477  550691 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:20:54.478482  550691 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:20:54.478485  550691 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:20:54.478488  550691 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:20:54.478491  550691 cri.go:89] found id: ""
	I1123 10:20:54.478546  550691 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:20:54.493738  550691 out.go:203] 
	W1123 10:20:54.496568  550691 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:20:54.496596  550691 out.go:285] * 
	* 
	W1123 10:20:54.503748  550691 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:20:54.506880  550691 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (62.71s)
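The disable failure above exits at minikube's paused-state pre-check: it lists kube-system containers with crictl (which succeeds in the stderr above) and then runs `sudo runc list -f json` to see whether any are paused, and that command fails because /run/runc does not exist on this crio node. A minimal sketch for reproducing the check by hand is below; the binary path and profile name are taken from this log, and the two checked commands are exactly the ones shown in the stderr output.

    # Sketch: re-run the paused-check commands from the failing addon disable,
    # assuming the addons-832672 profile from this run is still up.
    out/minikube-linux-arm64 -p addons-832672 ssh -- \
      'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
    out/minikube-linux-arm64 -p addons-832672 ssh -- 'sudo runc list -f json'   # fails: open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p addons-832672 ssh -- 'ls -ld /run/runc'         # confirm the missing state directory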

x
+
TestAddons/parallel/Headlamp (3.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp


=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-832672 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-832672 --alsologtostderr -v=1: exit status 11 (264.009038ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 10:19:48.671322  548794 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:19:48.672245  548794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:48.672289  548794 out.go:374] Setting ErrFile to fd 2...
	I1123 10:19:48.672315  548794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:48.672598  548794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:19:48.672949  548794 mustload.go:66] Loading cluster: addons-832672
	I1123 10:19:48.673368  548794 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:48.673491  548794 addons.go:622] checking whether the cluster is paused
	I1123 10:19:48.673650  548794 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:48.673692  548794 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:19:48.674252  548794 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:19:48.691987  548794 ssh_runner.go:195] Run: systemctl --version
	I1123 10:19:48.692056  548794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:19:48.709625  548794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:19:48.815793  548794 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:19:48.815922  548794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:19:48.843618  548794 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:19:48.843642  548794 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:19:48.843647  548794 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:19:48.843652  548794 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:19:48.843656  548794 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:19:48.843664  548794 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:19:48.843667  548794 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:19:48.843697  548794 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:19:48.843707  548794 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:19:48.843721  548794 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:19:48.843724  548794 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:19:48.843732  548794 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:19:48.843738  548794 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:19:48.843741  548794 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:19:48.843744  548794 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:19:48.843749  548794 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:19:48.843766  548794 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:19:48.843787  548794 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:19:48.843800  548794 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:19:48.843803  548794 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:19:48.843808  548794 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:19:48.843811  548794 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:19:48.843814  548794 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:19:48.843818  548794 cri.go:89] found id: ""
	I1123 10:19:48.843877  548794 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:48.858521  548794 out.go:203] 
	W1123 10:19:48.861474  548794 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:19:48.861504  548794 out.go:285] * 
	* 
	W1123 10:19:48.868652  548794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:19:48.871629  548794 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-832672 --alsologtostderr -v=1": exit status 11
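The enable path fails the same way: the command aborts in the same paused check before any Headlamp resources are applied, so nothing is rolled out. A hedged sketch for confirming the cluster is not actually paused and retrying, using only stock minikube subcommands (profile name and binary path taken from this log):

    # Sketch only; `unpause` may hit the same runc error on this runtime, so the
    # `|| true` keeps the sequence going either way.
    out/minikube-linux-arm64 -p addons-832672 status
    out/minikube-linux-arm64 -p addons-832672 unpause || true
    out/minikube-linux-arm64 addons enable headlamp -p addons-832672 --alsologtostderr -v=1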
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-832672
helpers_test.go:243: (dbg) docker inspect addons-832672:

-- stdout --
	[
	    {
	        "Id": "3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2",
	        "Created": "2025-11-23T10:17:20.139779283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543077,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:20.198692713Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/hosts",
	        "LogPath": "/var/lib/docker/containers/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2/3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2-json.log",
	        "Name": "/addons-832672",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-832672:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-832672",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3d8dabe9a4104e93d6fa2c694baa322dfc816e04f9fd894ebcbc42c2693e24f2",
	                "LowerDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f69b5e99f4598ad2809e7c816c90cefff9729069b6d1d9da9b4fc2f611181d0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-832672",
	                "Source": "/var/lib/docker/volumes/addons-832672/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-832672",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-832672",
	                "name.minikube.sigs.k8s.io": "addons-832672",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ac81a0ab4c67821052f538558acd818d27ad7628f4ba5d58d6456ceab807b45",
	            "SandboxKey": "/var/run/docker/netns/2ac81a0ab4c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-832672": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:2d:9c:bd:78:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39f142dc2e0330f3f717f783158c0e1012182cbdd04b57850dea4f941ef1a75a",
	                    "EndpointID": "a29433a4ffeb1b5ba94266313b3744451b16f65f3ca3f5e591814d37cec482de",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-832672",
	                        "3d8dabe9a410"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
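The SSH port the test harness dials (33511) is resolved from the NetworkSettings.Ports map in the inspect output above. A minimal sketch of the same lookup, using the inspect template that appears earlier in this log (quoting adjusted for an interactive shell):

    # Sketch: resolve the host port bound to 22/tcp for the addons-832672 container.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-832672
    # prints 33511 for this run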
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-832672 -n addons-832672
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-832672 logs -n 25: (1.435588258s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-038654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-038654   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p download-only-038654                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-038654   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ -o=json --download-only -p download-only-263851 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-263851   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p download-only-263851                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-263851   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p download-only-038654                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-038654   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p download-only-263851                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-263851   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ --download-only -p download-docker-549884 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-549884 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ delete  │ -p download-docker-549884                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-549884 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ --download-only -p binary-mirror-279599 --alsologtostderr --binary-mirror http://127.0.0.1:42529 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-279599   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ delete  │ -p binary-mirror-279599                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-279599   │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ addons  │ enable dashboard -p addons-832672                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ addons  │ disable dashboard -p addons-832672                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ start   │ -p addons-832672 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:19 UTC │
	│ addons  │ addons-832672 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ addons  │ addons-832672 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-832672 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-832672          │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:54.363486  542668 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:54.363629  542668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:54.363641  542668 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:54.363646  542668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:54.364008  542668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:16:54.364813  542668 out.go:368] Setting JSON to false
	I1123 10:16:54.365700  542668 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10763,"bootTime":1763882251,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:16:54.365775  542668 start.go:143] virtualization:  
	I1123 10:16:54.369248  542668 out.go:179] * [addons-832672] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:16:54.372992  542668 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:16:54.373124  542668 notify.go:221] Checking for updates...
	I1123 10:16:54.379330  542668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:54.382193  542668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:16:54.385120  542668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:16:54.388011  542668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:16:54.390873  542668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:16:54.393899  542668 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:54.429288  542668 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:16:54.429436  542668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:54.492185  542668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 10:16:54.482756046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:54.492303  542668 docker.go:319] overlay module found
	I1123 10:16:54.495485  542668 out.go:179] * Using the docker driver based on user configuration
	I1123 10:16:54.498197  542668 start.go:309] selected driver: docker
	I1123 10:16:54.498216  542668 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:54.498230  542668 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:16:54.498954  542668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:54.550202  542668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 10:16:54.541624639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:54.550387  542668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:54.550610  542668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:54.553324  542668 out.go:179] * Using Docker driver with root privileges
	I1123 10:16:54.556037  542668 cni.go:84] Creating CNI manager for ""
	I1123 10:16:54.556114  542668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:54.556128  542668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:54.556205  542668 start.go:353] cluster config:
	{Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1123 10:16:54.559233  542668 out.go:179] * Starting "addons-832672" primary control-plane node in "addons-832672" cluster
	I1123 10:16:54.562058  542668 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:54.564914  542668 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:54.567683  542668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:54.567731  542668 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:16:54.567743  542668 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:54.567751  542668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:54.567844  542668 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:16:54.567855  542668 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:16:54.568192  542668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/config.json ...
	I1123 10:16:54.568222  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/config.json: {Name:mk91c43859c1618dd2f2f8557f3936708ed084f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:54.583683  542668 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:16:54.583827  542668 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 10:16:54.583846  542668 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 10:16:54.583851  542668 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 10:16:54.583858  542668 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 10:16:54.583863  542668 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 10:17:13.225834  542668 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 10:17:13.225870  542668 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:17:13.225912  542668 start.go:360] acquireMachinesLock for addons-832672: {Name:mkc984d0fcfcecd7b88c6de76ca17d111bad3a06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:17:13.226031  542668 start.go:364] duration metric: took 97.929µs to acquireMachinesLock for "addons-832672"
	I1123 10:17:13.226058  542668 start.go:93] Provisioning new machine with config: &{Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:13.226131  542668 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:17:13.229562  542668 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 10:17:13.229810  542668 start.go:159] libmachine.API.Create for "addons-832672" (driver="docker")
	I1123 10:17:13.229846  542668 client.go:173] LocalClient.Create starting
	I1123 10:17:13.229970  542668 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 10:17:13.335853  542668 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 10:17:13.793308  542668 cli_runner.go:164] Run: docker network inspect addons-832672 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:17:13.809247  542668 cli_runner.go:211] docker network inspect addons-832672 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:17:13.809338  542668 network_create.go:284] running [docker network inspect addons-832672] to gather additional debugging logs...
	I1123 10:17:13.809359  542668 cli_runner.go:164] Run: docker network inspect addons-832672
	W1123 10:17:13.825324  542668 cli_runner.go:211] docker network inspect addons-832672 returned with exit code 1
	I1123 10:17:13.825357  542668 network_create.go:287] error running [docker network inspect addons-832672]: docker network inspect addons-832672: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-832672 not found
	I1123 10:17:13.825371  542668 network_create.go:289] output of [docker network inspect addons-832672]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-832672 not found
	
	** /stderr **
	I1123 10:17:13.825515  542668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:13.841522  542668 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ebdb0}
	I1123 10:17:13.841566  542668 network_create.go:124] attempt to create docker network addons-832672 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 10:17:13.841626  542668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-832672 addons-832672
	I1123 10:17:13.903373  542668 network_create.go:108] docker network addons-832672 192.168.49.0/24 created
	I1123 10:17:13.903407  542668 kic.go:121] calculated static IP "192.168.49.2" for the "addons-832672" container
	I1123 10:17:13.903496  542668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:17:13.918266  542668 cli_runner.go:164] Run: docker volume create addons-832672 --label name.minikube.sigs.k8s.io=addons-832672 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:17:13.935722  542668 oci.go:103] Successfully created a docker volume addons-832672
	I1123 10:17:13.935818  542668 cli_runner.go:164] Run: docker run --rm --name addons-832672-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-832672 --entrypoint /usr/bin/test -v addons-832672:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:17:15.672992  542668 cli_runner.go:217] Completed: docker run --rm --name addons-832672-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-832672 --entrypoint /usr/bin/test -v addons-832672:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.737135065s)
	I1123 10:17:15.673025  542668 oci.go:107] Successfully prepared a docker volume addons-832672
	I1123 10:17:15.673074  542668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:15.673090  542668 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:17:15.673161  542668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-832672:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:17:20.069440  542668 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-832672:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.396201507s)
	I1123 10:17:20.069472  542668 kic.go:203] duration metric: took 4.396379356s to extract preloaded images to volume ...
	W1123 10:17:20.069624  542668 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:17:20.069737  542668 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:17:20.125273  542668 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-832672 --name addons-832672 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-832672 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-832672 --network addons-832672 --ip 192.168.49.2 --volume addons-832672:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:17:20.418074  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Running}}
	I1123 10:17:20.444024  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:20.470440  542668 cli_runner.go:164] Run: docker exec addons-832672 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:17:20.538097  542668 oci.go:144] the created container "addons-832672" has a running status.
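	[editor's note] For readability, the single long container-creation command above expands to the following (same flags, reflowed; the minikube labels are dropped and the kicbase digest abbreviated):
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname addons-832672 --name addons-832672 \
	  --network addons-832672 --ip 192.168.49.2 \
	  --volume addons-832672:/var \
	  --memory=4096mb --cpus=2 \
	  -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948
	The 127.0.0.1:: publishes are ephemeral host ports; they are what the later "docker container inspect ... 22/tcp" calls resolve, e.g. host port 33511 for SSH in this run.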
	I1123 10:17:20.538124  542668 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa...
	I1123 10:17:20.829235  542668 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:17:20.852314  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:20.875514  542668 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:17:20.875535  542668 kic_runner.go:114] Args: [docker exec --privileged addons-832672 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:17:20.949228  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:20.969584  542668 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:20.969681  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:20.991172  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:20.991482  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:20.991491  542668 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:20.993146  542668 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:17:24.153197  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-832672
	
	I1123 10:17:24.153222  542668 ubuntu.go:182] provisioning hostname "addons-832672"
	I1123 10:17:24.153301  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.171478  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.171804  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:24.171820  542668 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-832672 && echo "addons-832672" | sudo tee /etc/hostname
	I1123 10:17:24.331120  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-832672
	
	I1123 10:17:24.331203  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.350811  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.351140  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:24.351163  542668 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-832672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-832672/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-832672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:24.505929  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
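	[editor's note] The script above is how provisioning pins the hostname in the node's /etc/hosts: if no entry for addons-832672 exists it rewrites the 127.0.1.1 line when present, otherwise appends one. A simplified, hand-runnable sketch of the same logic (the logged version uses exact whole-line matches via grep -x):
	if ! grep -q 'addons-832672' /etc/hosts; then
	  if grep -q '^127.0.1.1' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1.*/127.0.1.1 addons-832672/' /etc/hosts
	  else
	    echo '127.0.1.1 addons-832672' | sudo tee -a /etc/hosts
	  fi
	fi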
	I1123 10:17:24.505960  542668 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 10:17:24.505988  542668 ubuntu.go:190] setting up certificates
	I1123 10:17:24.505998  542668 provision.go:84] configureAuth start
	I1123 10:17:24.506065  542668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-832672
	I1123 10:17:24.523675  542668 provision.go:143] copyHostCerts
	I1123 10:17:24.523769  542668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 10:17:24.523894  542668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 10:17:24.523962  542668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 10:17:24.524015  542668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.addons-832672 san=[127.0.0.1 192.168.49.2 addons-832672 localhost minikube]
	I1123 10:17:24.659299  542668 provision.go:177] copyRemoteCerts
	I1123 10:17:24.659370  542668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:24.659415  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.681787  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:24.785152  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:17:24.803752  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:24.821501  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 10:17:24.839244  542668 provision.go:87] duration metric: took 333.216334ms to configureAuth
	I1123 10:17:24.839273  542668 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:24.839475  542668 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:24.839591  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:24.857708  542668 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.858013  542668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33511 <nil> <nil>}
	I1123 10:17:24.858031  542668 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:25.157246  542668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:25.157266  542668 machine.go:97] duration metric: took 4.187664851s to provisionDockerMachine
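	[editor's note] The "setting minikube options for container-runtime" command above writes a one-line environment file for CRI-O and restarts the service so the cluster's service CIDR is treated as an insecure registry. Judging from the printf in the log, the resulting file is simply:
	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '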
	I1123 10:17:25.157276  542668 client.go:176] duration metric: took 11.927419376s to LocalClient.Create
	I1123 10:17:25.157296  542668 start.go:167] duration metric: took 11.927487413s to libmachine.API.Create "addons-832672"
	I1123 10:17:25.157303  542668 start.go:293] postStartSetup for "addons-832672" (driver="docker")
	I1123 10:17:25.157313  542668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:25.157374  542668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:25.157440  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.175535  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.281712  542668 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:25.285361  542668 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:25.285388  542668 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:25.285400  542668 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 10:17:25.285490  542668 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 10:17:25.285518  542668 start.go:296] duration metric: took 128.209687ms for postStartSetup
	I1123 10:17:25.285837  542668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-832672
	I1123 10:17:25.302852  542668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/config.json ...
	I1123 10:17:25.303144  542668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:25.303206  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.320008  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.422340  542668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:25.427044  542668 start.go:128] duration metric: took 12.200898307s to createHost
	I1123 10:17:25.427072  542668 start.go:83] releasing machines lock for "addons-832672", held for 12.201031692s
	I1123 10:17:25.427144  542668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-832672
	I1123 10:17:25.444345  542668 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:25.444402  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.444410  542668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:25.444476  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:25.469044  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.477787  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:25.577234  542668 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:25.671815  542668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:25.708506  542668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:25.713453  542668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:25.713555  542668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:25.741871  542668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:17:25.741898  542668 start.go:496] detecting cgroup driver to use...
	I1123 10:17:25.741931  542668 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:17:25.741988  542668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:25.759835  542668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:25.771979  542668 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:25.772043  542668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:25.789006  542668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:25.806801  542668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:25.929731  542668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:26.054348  542668 docker.go:234] disabling docker service ...
	I1123 10:17:26.054476  542668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:26.078246  542668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:26.091971  542668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:26.213478  542668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:26.331479  542668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:26.343886  542668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:26.357910  542668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:26.357977  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.367058  542668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:17:26.367201  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.375945  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.384387  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.392873  542668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:26.400822  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.409373  542668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.423547  542668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:26.432702  542668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:26.441223  542668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:26.448822  542668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:26.560456  542668 ssh_runner.go:195] Run: sudo systemctl restart crio
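	[editor's note] Taken together, the sed calls above are the whole CRI-O configuration pass for this run. Condensed into a hand-run sketch against /etc/crio/crio.conf.d/02-crio.conf (values copied from the log; this is not a minikube interface):
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# pause image and cgroup driver used by this run
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# let pods bind low ports, enable forwarding, then restart CRI-O
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio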
	I1123 10:17:26.740523  542668 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:26.740615  542668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:26.744250  542668 start.go:564] Will wait 60s for crictl version
	I1123 10:17:26.744361  542668 ssh_runner.go:195] Run: which crictl
	I1123 10:17:26.747778  542668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:26.775219  542668 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:26.775410  542668 ssh_runner.go:195] Run: crio --version
	I1123 10:17:26.805965  542668 ssh_runner.go:195] Run: crio --version
	I1123 10:17:26.835781  542668 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:26.838614  542668 cli_runner.go:164] Run: docker network inspect addons-832672 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:26.858425  542668 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:26.862338  542668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:26.872345  542668 kubeadm.go:884] updating cluster {Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:26.872473  542668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:26.872537  542668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:26.908958  542668 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:26.908984  542668 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:26.909041  542668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:26.934054  542668 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:26.934078  542668 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:26.934088  542668 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:26.934181  542668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-832672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
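	[editor's note] The kubelet fragment above is installed as a systemd drop-in (the 363-byte copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down). Reconstructed from that fragment, the drop-in presumably looks like:
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed from the log, not copied from disk)
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-832672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]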
	I1123 10:17:26.934298  542668 ssh_runner.go:195] Run: crio config
	I1123 10:17:27.004935  542668 cni.go:84] Creating CNI manager for ""
	I1123 10:17:27.004956  542668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:27.004980  542668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:27.005016  542668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-832672 NodeName:addons-832672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:27.005180  542668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-832672"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:27.005274  542668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:27.015515  542668 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:27.015608  542668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:27.023706  542668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 10:17:27.036964  542668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:27.050072  542668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
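	[editor's note] The rendered kubeadm config above is what lands in /var/tmp/minikube/kubeadm.yaml.new (the 2210-byte copy in the last line) and is later consumed by kubeadm init. Reflowed from the log for readability, that invocation is roughly:
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
	(The full --ignore-preflight-errors list appears verbatim in the StartCluster section further down.)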
	I1123 10:17:27.063584  542668 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:27.067384  542668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:27.077284  542668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:27.189368  542668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:27.210093  542668 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672 for IP: 192.168.49.2
	I1123 10:17:27.210168  542668 certs.go:195] generating shared ca certs ...
	I1123 10:17:27.210203  542668 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.210389  542668 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 10:17:27.613414  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt ...
	I1123 10:17:27.613455  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt: {Name:mke30750f9c6ff0fde60b494542df07664fb1b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.613668  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key ...
	I1123 10:17:27.613680  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key: {Name:mkeefc63f05e517f4e56dec8685a29e5c333b1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.613766  542668 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 10:17:27.678213  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt ...
	I1123 10:17:27.678243  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt: {Name:mk910064634b90b3a357667f6d1c2c6ae9d2cbfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.678398  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key ...
	I1123 10:17:27.678418  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key: {Name:mk70ed3d2d9f99deb614a9f3da65b3eec4847bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.678499  542668 certs.go:257] generating profile certs ...
	I1123 10:17:27.678565  542668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.key
	I1123 10:17:27.678582  542668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt with IP's: []
	I1123 10:17:28.005755  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt ...
	I1123 10:17:28.005793  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: {Name:mk3212e233b345c80c7f5646a85d42fdb80def6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.006022  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.key ...
	I1123 10:17:28.006037  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.key: {Name:mk0d7a15230898871fde659685152c722e0134c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.006139  542668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb
	I1123 10:17:28.006162  542668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 10:17:28.495275  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb ...
	I1123 10:17:28.495310  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb: {Name:mk9c0218d8e1f341f93e84ebbed51df17ccf7c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.495500  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb ...
	I1123 10:17:28.495516  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb: {Name:mk804a0fcba3f7fe04e482e4cb9dad1ad68d5685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.495605  542668 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt.107479bb -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt
	I1123 10:17:28.495688  542668 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key.107479bb -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key
	I1123 10:17:28.495740  542668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key
	I1123 10:17:28.495761  542668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt with IP's: []
	I1123 10:17:28.739580  542668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt ...
	I1123 10:17:28.739612  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt: {Name:mk558cf10d22ce15b0080591ed282b80c13bbdd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.739790  542668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key ...
	I1123 10:17:28.739805  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key: {Name:mke5e2e9b48e9ee0c18861eaf2ee14facbbf43fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:28.740003  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:17:28.740049  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:28.740082  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:28.740146  542668 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:28.740726  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:28.759805  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:28.778320  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:28.795713  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:17:28.813217  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 10:17:28.831318  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:17:28.850313  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:28.871669  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:28.893294  542668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:28.912683  542668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:28.925780  542668 ssh_runner.go:195] Run: openssl version
	I1123 10:17:28.931919  542668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:28.940566  542668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:28.944342  542668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:28.944414  542668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:28.987766  542668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
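	[editor's note] The two steps above install minikubeCA.pem under the system trust directory and add the OpenSSL subject-hash symlink in /etc/ssl/certs; the b5213941 name is presumably the hash printed by the preceding openssl call. Equivalent by hand:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 in this run
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0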
	I1123 10:17:28.996130  542668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:28.999576  542668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:17:28.999626  542668 kubeadm.go:401] StartCluster: {Name:addons-832672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-832672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:28.999705  542668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:28.999770  542668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:29.027671  542668 cri.go:89] found id: ""
	I1123 10:17:29.027747  542668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:29.035645  542668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:17:29.043264  542668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:17:29.043377  542668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:17:29.050983  542668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:17:29.051002  542668 kubeadm.go:158] found existing configuration files:
	
	I1123 10:17:29.051050  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:17:29.058665  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:17:29.058735  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:17:29.065830  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:17:29.073325  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:17:29.073394  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:17:29.080913  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:17:29.088450  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:17:29.088515  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:17:29.095639  542668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:17:29.103086  542668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:17:29.103151  542668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:17:29.110563  542668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:17:29.151250  542668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:17:29.151477  542668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:17:29.173506  542668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:17:29.173585  542668 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:17:29.173623  542668 kubeadm.go:319] OS: Linux
	I1123 10:17:29.173674  542668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:17:29.173727  542668 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:17:29.173778  542668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:17:29.173830  542668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:17:29.173901  542668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:17:29.173956  542668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:17:29.174015  542668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:17:29.174065  542668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:17:29.174115  542668 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:17:29.246225  542668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:17:29.246429  542668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:17:29.246546  542668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:17:29.253991  542668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:17:29.261028  542668 out.go:252]   - Generating certificates and keys ...
	I1123 10:17:29.261195  542668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:17:29.261300  542668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:17:29.747544  542668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:17:30.247214  542668 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:17:30.735964  542668 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:17:31.857438  542668 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:17:32.166373  542668 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:17:32.166890  542668 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-832672 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 10:17:32.680842  542668 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:17:32.681307  542668 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-832672 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 10:17:33.340281  542668 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:17:34.183295  542668 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:17:34.360724  542668 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:17:34.361067  542668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:17:34.531519  542668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:17:35.266773  542668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:17:36.116615  542668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:17:37.310420  542668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:17:37.512143  542668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:17:37.512798  542668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:17:37.515612  542668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:17:37.519223  542668 out.go:252]   - Booting up control plane ...
	I1123 10:17:37.519337  542668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:17:37.519422  542668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:17:37.519497  542668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:17:37.534025  542668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:17:37.534345  542668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:17:37.543651  542668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:17:37.543966  542668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:17:37.544019  542668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:17:37.669504  542668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:17:37.669629  542668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:17:39.671973  542668 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001959236s
	I1123 10:17:39.675513  542668 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:17:39.675919  542668 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 10:17:39.676243  542668 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:17:39.676932  542668 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:17:42.348897  542668 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.671569154s
	I1123 10:17:44.193714  542668 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.516204487s
	I1123 10:17:45.678221  542668 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001553694s
	I1123 10:17:45.699613  542668 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:17:45.715340  542668 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:17:45.731186  542668 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:17:45.731457  542668 kubeadm.go:319] [mark-control-plane] Marking the node addons-832672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:17:45.743038  542668 kubeadm.go:319] [bootstrap-token] Using token: 8jeqce.9gmif7n048bp2h39
	I1123 10:17:45.746427  542668 out.go:252]   - Configuring RBAC rules ...
	I1123 10:17:45.746573  542668 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:17:45.752797  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:17:45.761578  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:17:45.765455  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:17:45.769267  542668 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:17:45.773842  542668 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:17:46.085729  542668 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:17:46.517158  542668 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:17:47.085205  542668 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:17:47.086459  542668 kubeadm.go:319] 
	I1123 10:17:47.086532  542668 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:17:47.086537  542668 kubeadm.go:319] 
	I1123 10:17:47.086616  542668 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:17:47.086620  542668 kubeadm.go:319] 
	I1123 10:17:47.086646  542668 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:17:47.086704  542668 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:17:47.086754  542668 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:17:47.086758  542668 kubeadm.go:319] 
	I1123 10:17:47.086824  542668 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:17:47.086828  542668 kubeadm.go:319] 
	I1123 10:17:47.086876  542668 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:17:47.086879  542668 kubeadm.go:319] 
	I1123 10:17:47.086940  542668 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:17:47.087016  542668 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:17:47.087090  542668 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:17:47.087094  542668 kubeadm.go:319] 
	I1123 10:17:47.087178  542668 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:17:47.087255  542668 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:17:47.087259  542668 kubeadm.go:319] 
	I1123 10:17:47.087344  542668 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8jeqce.9gmif7n048bp2h39 \
	I1123 10:17:47.087447  542668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 10:17:47.087467  542668 kubeadm.go:319] 	--control-plane 
	I1123 10:17:47.087472  542668 kubeadm.go:319] 
	I1123 10:17:47.087556  542668 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:17:47.087560  542668 kubeadm.go:319] 
	I1123 10:17:47.087641  542668 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8jeqce.9gmif7n048bp2h39 \
	I1123 10:17:47.087744  542668 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 10:17:47.090076  542668 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:17:47.090294  542668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:17:47.090396  542668 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:17:47.090430  542668 cni.go:84] Creating CNI manager for ""
	I1123 10:17:47.090444  542668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:47.093715  542668 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:17:47.096490  542668 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:17:47.100338  542668 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:17:47.100362  542668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:17:47.112756  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
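	[editor's note] The CNI step writes the recommended kindnet manifest (2601 bytes) to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. Reflowed from the log:
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -f /var/tmp/minikube/cni.yaml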
	I1123 10:17:47.394220  542668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:17:47.394354  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:47.394434  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-832672 minikube.k8s.io/updated_at=2025_11_23T10_17_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=addons-832672 minikube.k8s.io/primary=true
	I1123 10:17:47.573613  542668 ops.go:34] apiserver oom_adj: -16
	I1123 10:17:47.573730  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:48.074627  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:48.574150  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:49.073800  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:49.574845  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:50.073907  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:50.574808  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:51.074497  542668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:51.161873  542668 kubeadm.go:1114] duration metric: took 3.76756513s to wait for elevateKubeSystemPrivileges
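elevateKubeSystemPrivileges is the step that created the minikube-rbac ClusterRoleBinding and labelled the node (the kubectl commands a few lines above); the loop simply polled "get sa default" until the default ServiceAccount existed. Confirming the result afterwards is plain kubectl (generic commands, not from the run):

	kubectl get clusterrolebinding minikube-rbac -o wide
	kubectl get node addons-832672 --show-labels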
	I1123 10:17:51.161909  542668 kubeadm.go:403] duration metric: took 22.16228594s to StartCluster
	I1123 10:17:51.161927  542668 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:51.162050  542668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:17:51.162424  542668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:51.162639  542668 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:51.162781  542668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:17:51.163054  542668 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:51.163099  542668 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
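The toEnable map above is the per-addon switchboard the rest of this log works through; the same state can be inspected or changed from the CLI (illustrative minikube commands, not executed by this test):

	minikube -p addons-832672 addons list
	minikube -p addons-832672 addons enable metrics-server
	minikube -p addons-832672 addons disable volcano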
	I1123 10:17:51.163185  542668 addons.go:70] Setting yakd=true in profile "addons-832672"
	I1123 10:17:51.163205  542668 addons.go:239] Setting addon yakd=true in "addons-832672"
	I1123 10:17:51.163235  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.163747  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164449  542668 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-832672"
	I1123 10:17:51.164465  542668 addons.go:70] Setting cloud-spanner=true in profile "addons-832672"
	I1123 10:17:51.164478  542668 addons.go:70] Setting registry=true in profile "addons-832672"
	I1123 10:17:51.164484  542668 addons.go:239] Setting addon cloud-spanner=true in "addons-832672"
	I1123 10:17:51.164488  542668 addons.go:239] Setting addon registry=true in "addons-832672"
	I1123 10:17:51.164511  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.164518  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.164949  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164950  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164454  542668 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-832672"
	I1123 10:17:51.167479  542668 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-832672"
	I1123 10:17:51.167508  542668 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-832672"
	I1123 10:17:51.167530  542668 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-832672"
	I1123 10:17:51.167589  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.167850  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.169303  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.164470  542668 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-832672"
	I1123 10:17:51.172216  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.172787  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.183694  542668 addons.go:70] Setting volcano=true in profile "addons-832672"
	I1123 10:17:51.183773  542668 addons.go:239] Setting addon volcano=true in "addons-832672"
	I1123 10:17:51.183825  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.170236  542668 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-832672"
	I1123 10:17:51.184321  542668 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-832672"
	I1123 10:17:51.184343  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.184861  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.185184  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.194446  542668 addons.go:70] Setting volumesnapshots=true in profile "addons-832672"
	I1123 10:17:51.194537  542668 addons.go:239] Setting addon volumesnapshots=true in "addons-832672"
	I1123 10:17:51.194588  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.195104  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.167469  542668 addons.go:70] Setting storage-provisioner=true in profile "addons-832672"
	I1123 10:17:51.196676  542668 addons.go:239] Setting addon storage-provisioner=true in "addons-832672"
	I1123 10:17:51.196741  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.170259  542668 addons.go:70] Setting default-storageclass=true in profile "addons-832672"
	I1123 10:17:51.196848  542668 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-832672"
	I1123 10:17:51.197258  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.170266  542668 addons.go:70] Setting gcp-auth=true in profile "addons-832672"
	I1123 10:17:51.252711  542668 mustload.go:66] Loading cluster: addons-832672
	I1123 10:17:51.252985  542668 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:51.253317  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.170273  542668 addons.go:70] Setting ingress=true in profile "addons-832672"
	I1123 10:17:51.270221  542668 addons.go:239] Setting addon ingress=true in "addons-832672"
	I1123 10:17:51.270349  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.271043  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.276217  542668 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 10:17:51.285501  542668 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 10:17:51.285571  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 10:17:51.285676  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.294398  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 10:17:51.170280  542668 addons.go:70] Setting ingress-dns=true in profile "addons-832672"
	I1123 10:17:51.295645  542668 addons.go:239] Setting addon ingress-dns=true in "addons-832672"
	I1123 10:17:51.295695  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.296189  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.300042  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 10:17:51.303040  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 10:17:51.305894  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 10:17:51.308843  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 10:17:51.308954  542668 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 10:17:51.311651  542668 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 10:17:51.311674  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 10:17:51.311741  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.170286  542668 addons.go:70] Setting inspektor-gadget=true in profile "addons-832672"
	I1123 10:17:51.317351  542668 addons.go:239] Setting addon inspektor-gadget=true in "addons-832672"
	I1123 10:17:51.317433  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.317898  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.170292  542668 addons.go:70] Setting metrics-server=true in profile "addons-832672"
	I1123 10:17:51.328614  542668 addons.go:239] Setting addon metrics-server=true in "addons-832672"
	I1123 10:17:51.328660  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.329116  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.349911  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
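Each sshutil.go line records a new SSH client to 127.0.0.1:33511, the host port Docker mapped to the node container's 22/tcp; the docker inspect template used by cli_runner above is what resolves that port. A manual connection works the same way (sketch using the key path, user and port shown in the log):

	# Resolve the mapped SSH port for the node container:
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-832672
	# Connect as the "docker" user with the machine key (port from the output above):
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa \
	  -p 33511 docker@127.0.0.1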
	I1123 10:17:51.167454  542668 addons.go:70] Setting registry-creds=true in profile "addons-832672"
	I1123 10:17:51.359647  542668 addons.go:239] Setting addon registry-creds=true in "addons-832672"
	I1123 10:17:51.170331  542668 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:51.252375  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.362710  542668 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-832672"
	I1123 10:17:51.362867  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.363321  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.369541  542668 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 10:17:51.369771  542668 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 10:17:51.373056  542668 addons.go:239] Setting addon default-storageclass=true in "addons-832672"
	I1123 10:17:51.373093  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.373766  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.377368  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.377853  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:51.384633  542668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:51.384821  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 10:17:51.388415  542668 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 10:17:51.388609  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:51.390094  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 10:17:51.390110  542668 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 10:17:51.390163  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	W1123 10:17:51.404364  542668 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 10:17:51.409060  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 10:17:51.411192  542668 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 10:17:51.411215  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 10:17:51.411278  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.411903  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 10:17:51.424883  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 10:17:51.427804  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 10:17:51.427836  542668 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 10:17:51.427917  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.430898  542668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
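The sed pipeline above edits the coredns ConfigMap in place: it inserts a log directive before the errors line and a hosts block before "forward . /etc/resolv.conf", so host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster. The injected fragment of the Corefile ends up looking like this (reconstructed from the sed expressions, not captured from the cluster; the rest of the Corefile is unchanged):

	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf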
	I1123 10:17:51.450321  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 10:17:51.450516  542668 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 10:17:51.513719  542668 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 10:17:51.498644  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.514667  542668 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 10:17:51.524312  542668 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 10:17:51.532165  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 10:17:51.533606  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.544421  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 10:17:51.544442  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 10:17:51.544520  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.563843  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 10:17:51.566879  542668 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 10:17:51.566900  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 10:17:51.566963  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.575192  542668 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 10:17:51.583875  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.588365  542668 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:51.592438  542668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:51.592583  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.589219  542668 out.go:179]   - Using image docker.io/busybox:stable
	I1123 10:17:51.595153  542668 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 10:17:51.595173  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 10:17:51.595246  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.615564  542668 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 10:17:51.617903  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 10:17:51.617968  542668 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 10:17:51.618068  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.631481  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.633058  542668 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 10:17:51.633216  542668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 10:17:51.633635  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 10:17:51.633707  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.646869  542668 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 10:17:51.649895  542668 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 10:17:51.649920  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 10:17:51.649997  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.663109  542668 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 10:17:51.663131  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 10:17:51.663192  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.673254  542668 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:51.673465  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.678161  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.680341  542668 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:51.680359  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:51.680420  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:51.725593  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.770974  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.771728  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.785082  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.815818  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.821693  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.832842  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.837687  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.840170  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:51.880861  542668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:52.083458  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 10:17:52.309345  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 10:17:52.382694  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 10:17:52.386815  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 10:17:52.502170  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 10:17:52.534870  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 10:17:52.653302  542668 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 10:17:52.653390  542668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 10:17:52.681477  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 10:17:52.681541  542668 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 10:17:52.686669  542668 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 10:17:52.686747  542668 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 10:17:52.690236  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 10:17:52.690307  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 10:17:52.696553  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 10:17:52.696621  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 10:17:52.726888  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:52.778191  542668 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 10:17:52.778267  542668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 10:17:52.781074  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 10:17:52.791251  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 10:17:52.791325  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 10:17:52.793607  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 10:17:52.793661  542668 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 10:17:52.796750  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 10:17:52.830701  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:52.861095  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 10:17:52.861173  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 10:17:52.864392  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 10:17:52.864461  542668 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 10:17:52.910023  542668 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 10:17:52.910094  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 10:17:52.930921  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 10:17:52.931002  542668 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 10:17:52.935385  542668 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 10:17:52.935457  542668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 10:17:52.936714  542668 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 10:17:52.936776  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 10:17:53.010849  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 10:17:53.010926  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 10:17:53.054548  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 10:17:53.069350  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 10:17:53.119402  542668 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:17:53.119425  542668 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 10:17:53.123891  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 10:17:53.123915  542668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 10:17:53.175747  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 10:17:53.175771  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 10:17:53.207491  542668 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 10:17:53.207518  542668 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 10:17:53.326953  542668 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.895955815s)
	I1123 10:17:53.326983  542668 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1123 10:17:53.327063  542668 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.446173781s)
	I1123 10:17:53.327841  542668 node_ready.go:35] waiting up to 6m0s for node "addons-832672" to be "Ready" ...
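node_ready.go will now poll the node's Ready condition for up to 6 minutes (the W-prefixed lines below are the retries while the status is still False). A one-off equivalent with plain kubectl looks like this (generic commands, not taken from the run):

	kubectl get node addons-832672 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until the kubelet reports Ready:
	kubectl wait --for=condition=Ready node/addons-832672 --timeout=6m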
	I1123 10:17:53.365509  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:17:53.395377  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 10:17:53.395403  542668 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 10:17:53.521934  542668 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 10:17:53.521959  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 10:17:53.684176  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.600635505s)
	I1123 10:17:53.722875  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.413493637s)
	I1123 10:17:53.744682  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 10:17:53.755097  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 10:17:53.755119  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 10:17:53.831816  542668 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-832672" context rescaled to 1 replicas
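On a single-node cluster minikube trims CoreDNS to a single replica; the kapi.go call above is roughly equivalent to:

	kubectl -n kube-system scale deployment coredns --replicas=1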
	I1123 10:17:53.943632  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 10:17:53.943656  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 10:17:54.143213  542668 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 10:17:54.143241  542668 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 10:17:54.490004  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1123 10:17:55.335146  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:17:56.579843  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.197101942s)
	W1123 10:17:57.344303  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:17:57.543269  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.15641617s)
	I1123 10:17:57.543366  542668 addons.go:495] Verifying addon ingress=true in "addons-832672"
	I1123 10:17:57.543644  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.041384866s)
	I1123 10:17:57.543726  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.00878644s)
	I1123 10:17:57.543799  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.816839429s)
	I1123 10:17:57.544021  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.762889535s)
	I1123 10:17:57.544054  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.747248929s)
	I1123 10:17:57.544098  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.713332271s)
	I1123 10:17:57.544140  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.48950994s)
	I1123 10:17:57.544256  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.474883962s)
	I1123 10:17:57.544361  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.178825151s)
	I1123 10:17:57.544373  542668 addons.go:495] Verifying addon metrics-server=true in "addons-832672"
	I1123 10:17:57.544479  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.799767043s)
	W1123 10:17:57.544501  542668 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 10:17:57.544531  542668 retry.go:31] will retry after 269.020499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 10:17:57.544699  542668 addons.go:495] Verifying addon registry=true in "addons-832672"
	I1123 10:17:57.546881  542668 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-832672 service yakd-dashboard -n yakd-dashboard
	
	I1123 10:17:57.546998  542668 out.go:179] * Verifying ingress addon...
	I1123 10:17:57.550830  542668 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 10:17:57.551153  542668 out.go:179] * Verifying registry addon...
	I1123 10:17:57.555331  542668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 10:17:57.563592  542668 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 10:17:57.563613  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:57.576586  542668 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 10:17:57.576606  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:57.814509  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
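The failure captured above is an ordering problem, not a broken manifest: the VolumeSnapshotClass object was submitted in the same apply as the CRDs that define it, so the API server had no mapping for the kind yet ("ensure CRDs are installed first"). The retry here re-applies with --force after a short delay, by which time the CRDs created on the first pass are established, and it completes at 10:18:00 below. A generic two-phase pattern that avoids the race entirely (illustrative, not minikube's code):

	# 1) install the snapshot CRDs and wait until the API server accepts them
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# 2) only then create objects of those kinds
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml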
	I1123 10:17:57.829933  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.339881578s)
	I1123 10:17:57.830013  542668 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-832672"
	I1123 10:17:57.833139  542668 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 10:17:57.836746  542668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 10:17:57.858719  542668 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 10:17:57.858791  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
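The kapi.go:96 lines that follow poll each addon's pods by label selector until they leave Pending. The same checks by hand, using the labels and namespaces exactly as they appear in the log:

	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl -n kube-system   get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n kube-system   get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver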
	I1123 10:17:58.057315  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:58.059559  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:58.341079  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:58.555267  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:58.559527  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:58.840289  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:59.054985  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:59.058535  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:17:59.311719  542668 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 10:17:59.311818  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:59.330528  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:59.348363  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:17:59.446644  542668 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 10:17:59.459410  542668 addons.go:239] Setting addon gcp-auth=true in "addons-832672"
	I1123 10:17:59.459506  542668 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:17:59.459986  542668 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:17:59.476938  542668 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 10:17:59.476991  542668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:17:59.493664  542668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:17:59.553988  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:17:59.558554  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1123 10:17:59.831093  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:17:59.840089  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:00.055661  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:00.077880  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:00.354001  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:00.555268  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:00.558650  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:00.624998  542668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.810391235s)
	I1123 10:18:00.625042  542668 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.148078409s)
	I1123 10:18:00.628036  542668 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 10:18:00.630817  542668 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 10:18:00.633628  542668 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 10:18:00.633654  542668 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 10:18:00.648046  542668 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 10:18:00.648071  542668 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 10:18:00.664320  542668 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 10:18:00.664344  542668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 10:18:00.680192  542668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 10:18:00.841567  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:01.055661  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:01.087756  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:01.161618  542668 addons.go:495] Verifying addon gcp-auth=true in "addons-832672"
	I1123 10:18:01.165907  542668 out.go:179] * Verifying gcp-auth addon...
	I1123 10:18:01.169614  542668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 10:18:01.175821  542668 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 10:18:01.175846  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
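The gcp-auth webhook pod is waited on the same way; a manual equivalent that blocks until the pod is Ready (generic kubectl, timeout chosen arbitrarily):

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
	kubectl -n gcp-auth wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m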
	I1123 10:18:01.344132  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:01.554514  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:01.557945  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:01.673111  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:01.831196  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:01.839817  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:02.053828  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:02.058654  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:02.173321  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:02.340793  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:02.554192  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:02.559998  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:02.672837  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:02.840359  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:03.054711  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:03.058396  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:03.173368  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:03.340498  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:03.554789  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:03.559524  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:03.673192  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:03.839768  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:04.053761  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:04.058588  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:04.173366  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:04.331812  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:04.340190  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:04.555113  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:04.559128  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:04.673338  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:04.840733  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:05.054769  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:05.058570  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:05.173241  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:05.339881  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:05.554202  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:05.558645  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:05.673049  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:05.839367  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:06.054898  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:06.059262  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:06.173031  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:06.339691  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:06.555103  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:06.559689  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:06.672852  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:06.830789  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:06.840295  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:07.054743  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:07.058565  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:07.173615  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:07.345917  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:07.553938  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:07.558544  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:07.673362  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:07.840143  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:08.054346  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:08.059436  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:08.173238  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:08.340175  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:08.554976  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:08.558505  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:08.672363  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:08.831115  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:08.839875  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:09.053872  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:09.058619  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:09.172473  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:09.341120  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:09.553806  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:09.558194  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:09.673171  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:09.840402  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:10.054758  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:10.058590  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:10.173436  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:10.340464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:10.554775  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:10.557977  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:10.673694  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:10.831553  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:10.840323  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:11.054779  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:11.058975  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:11.172787  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:11.341495  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:11.553844  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:11.559233  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:11.672858  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:11.839454  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:12.054929  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:12.058431  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:12.173075  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:12.339753  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:12.554997  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:12.560265  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:12.673487  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:12.840278  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:13.054388  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:13.058215  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:13.173121  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:13.330806  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:13.339762  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:13.554684  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:13.557859  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:13.672644  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:13.840096  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:14.054309  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:14.058162  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:14.172767  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:14.339609  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:14.554137  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:14.559445  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:14.673683  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:14.839880  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:15.054860  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:15.059874  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:15.172766  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:15.340427  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:15.554422  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:15.557729  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:15.672834  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:15.831517  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:15.840564  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:16.054779  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:16.059143  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:16.172908  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:16.340397  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:16.554624  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:16.557507  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:16.672454  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:16.839778  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:17.054144  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:17.059225  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:17.173287  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:17.340464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:17.554585  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:17.558002  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:17.672880  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:17.840207  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:18.054409  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:18.059254  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:18.172845  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:18.330541  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:18.340295  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:18.554393  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:18.557911  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:18.672664  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:18.839498  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:19.054643  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:19.057950  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:19.172501  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:19.340391  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:19.554958  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:19.558486  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:19.673314  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:19.839456  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:20.054644  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:20.059436  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:20.173571  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:20.331200  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:20.340695  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:20.555167  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:20.559632  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:20.672247  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:20.840347  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:21.054530  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:21.058138  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:21.172780  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:21.340500  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:21.553966  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:21.558499  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:21.673266  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:21.839494  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:22.054775  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:22.058507  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:22.173466  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:22.332063  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:22.341861  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:22.554068  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:22.558428  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:22.673017  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:22.849596  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:23.053714  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:23.058467  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:23.173587  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:23.341249  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:23.554257  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:23.559784  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:23.672749  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:23.840044  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:24.053971  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:24.058950  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:24.172827  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:24.341037  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:24.554423  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:24.557920  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:24.672627  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:24.831532  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:24.840171  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:25.054813  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:25.058957  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:25.172464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:25.345703  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:25.554000  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:25.558583  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:25.672731  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:25.839480  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:26.055065  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:26.059204  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:26.172863  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:26.341009  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:26.554223  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:26.558900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:26.672713  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:26.839820  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:27.054311  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:27.057960  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:27.172950  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:27.330766  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:27.340813  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:27.553843  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:27.558196  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:27.672907  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:27.839824  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:28.054159  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:28.059230  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:28.173193  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:28.340442  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:28.554538  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:28.557938  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:28.672546  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:28.840283  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:29.054543  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:29.058511  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:29.172375  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:29.331192  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:29.345900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:29.554636  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:29.557822  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:29.672553  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:29.840167  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:30.055744  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:30.060127  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:30.172959  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:30.346842  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:30.554117  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:30.558566  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:30.672450  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:30.840200  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:31.054542  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:31.059293  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:31.173243  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:31.331466  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:31.340535  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:31.554544  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:31.558006  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:31.672927  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:31.839900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:32.054280  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:32.058117  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:32.172927  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:32.339919  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:32.554713  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:32.557825  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:32.672585  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:32.839871  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:33.054365  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:33.059368  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:33.173372  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:33.339888  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:33.554059  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:33.558495  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:33.672400  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 10:18:33.831174  542668 node_ready.go:57] node "addons-832672" has "Ready":"False" status (will retry)
	I1123 10:18:33.839890  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:34.054349  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:34.058619  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:34.173479  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:34.339964  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:34.554184  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:34.559194  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:34.672937  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:34.840099  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:35.055167  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:35.060903  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:35.173097  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:35.361562  542668 node_ready.go:49] node "addons-832672" is "Ready"
	I1123 10:18:35.361613  542668 node_ready.go:38] duration metric: took 42.033740758s for node "addons-832672" to be "Ready" ...
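	The 42-second figure above comes from a node readiness poll: the cluster node is fetched repeatedly until its Ready condition reports True, which is what the earlier "has \"Ready\":\"False\" status (will retry)" warnings were tracking. Below is a minimal sketch of that kind of check, assuming client-go and a local kubeconfig; the node name, polling interval, and kubeconfig path are illustrative and not taken from minikube's node_ready.go.

	// Illustrative sketch only (assumes client-go; not minikube's node_ready.go):
	// poll a node until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		start := time.Now()
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-832672", metav1.GetOptions{})
			if err == nil && nodeIsReady(node) {
				fmt.Printf("node is Ready after %s\n", time.Since(start))
				return
			}
			fmt.Println(`node has "Ready":"False" status (will retry)`)
			time.Sleep(2 * time.Second) // polling interval chosen for the sketch
		}
	}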
	I1123 10:18:35.361629  542668 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:35.361687  542668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:35.378336  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:35.399772  542668 api_server.go:72] duration metric: took 44.237100442s to wait for apiserver process to appear ...
	I1123 10:18:35.399847  542668 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:35.399894  542668 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 10:18:35.420601  542668 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 10:18:35.435734  542668 api_server.go:141] control plane version: v1.34.1
	I1123 10:18:35.435761  542668 api_server.go:131] duration metric: took 35.880113ms to wait for apiserver health ...
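	Before checking individual pods, the run gates on the apiserver's /healthz endpoint returning 200 "ok" at the address logged above. Below is a minimal sketch of such a health poll using plain net/http; the timeout, retry interval, and relaxed TLS handling are assumptions for illustration, not minikube's api_server.go.

	// Illustrative sketch only: poll an apiserver /healthz endpoint until it
	// returns HTTP 200 "ok" or a deadline expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The test cluster uses a self-signed certificate, so verification is
		// skipped here purely to keep the sketch self-contained.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval, arbitrary for the sketch
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}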
	I1123 10:18:35.435770  542668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:18:35.471297  542668 system_pods.go:59] 19 kube-system pods found
	I1123 10:18:35.471381  542668 system_pods.go:61] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending
	I1123 10:18:35.471404  542668 system_pods.go:61] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending
	I1123 10:18:35.471424  542668 system_pods.go:61] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending
	I1123 10:18:35.471461  542668 system_pods.go:61] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending
	I1123 10:18:35.471484  542668 system_pods.go:61] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:35.471505  542668 system_pods.go:61] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:35.471540  542668 system_pods.go:61] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:35.471560  542668 system_pods.go:61] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:35.471580  542668 system_pods.go:61] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending
	I1123 10:18:35.471601  542668 system_pods.go:61] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:35.471631  542668 system_pods.go:61] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:35.471654  542668 system_pods.go:61] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending
	I1123 10:18:35.471674  542668 system_pods.go:61] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending
	I1123 10:18:35.471693  542668 system_pods.go:61] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending
	I1123 10:18:35.471726  542668 system_pods.go:61] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending
	I1123 10:18:35.471743  542668 system_pods.go:61] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending
	I1123 10:18:35.471765  542668 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending
	I1123 10:18:35.471799  542668 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending
	I1123 10:18:35.471822  542668 system_pods.go:61] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending
	I1123 10:18:35.471843  542668 system_pods.go:74] duration metric: took 36.06738ms to wait for pod list to return data ...
	I1123 10:18:35.471879  542668 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:18:35.487089  542668 default_sa.go:45] found service account: "default"
	I1123 10:18:35.487161  542668 default_sa.go:55] duration metric: took 15.2597ms for default service account to be created ...
	I1123 10:18:35.487200  542668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:18:35.498814  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:35.498896  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending
	I1123 10:18:35.498916  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending
	I1123 10:18:35.498938  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending
	I1123 10:18:35.498974  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending
	I1123 10:18:35.498999  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:35.499020  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:35.499058  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:35.499081  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:35.499099  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending
	I1123 10:18:35.499133  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:35.499155  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:35.499173  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending
	I1123 10:18:35.499191  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending
	I1123 10:18:35.499221  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending
	I1123 10:18:35.499243  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending
	I1123 10:18:35.499262  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending
	I1123 10:18:35.499281  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending
	I1123 10:18:35.499318  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:35.499342  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending
	I1123 10:18:35.499389  542668 retry.go:31] will retry after 288.405129ms: missing components: kube-dns
	I1123 10:18:35.569732  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:35.574048  542668 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 10:18:35.574118  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:35.773898  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:35.813154  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:35.813238  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:18:35.813261  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending
	I1123 10:18:35.813297  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending
	I1123 10:18:35.813322  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending
	I1123 10:18:35.813345  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:35.813384  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:35.813434  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:35.813453  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:35.813490  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:35.813513  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:35.813535  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:35.813573  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:35.813594  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending
	I1123 10:18:35.813612  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending
	I1123 10:18:35.813631  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending
	I1123 10:18:35.813665  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending
	I1123 10:18:35.813685  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:35.813708  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:35.813742  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:18:35.813777  542668 retry.go:31] will retry after 369.032447ms: missing components: kube-dns
	I1123 10:18:35.841687  542668 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 10:18:35.841760  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:36.062766  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:36.063700  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:36.173264  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:36.275538  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:36.275620  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:18:36.275665  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 10:18:36.275686  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 10:18:36.275725  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 10:18:36.275749  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:36.275770  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:36.275810  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:36.275834  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:36.275856  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:36.275894  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:36.275918  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:36.275944  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:36.275983  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 10:18:36.276013  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 10:18:36.276036  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 10:18:36.276070  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 10:18:36.276095  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.276120  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.276159  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:18:36.276195  542668 retry.go:31] will retry after 345.412667ms: missing components: kube-dns
	I1123 10:18:36.374521  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:36.604070  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:36.604605  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:36.677320  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:36.679221  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:36.679292  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:18:36.679314  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 10:18:36.679351  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 10:18:36.679374  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 10:18:36.679393  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:36.679415  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:36.679446  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:36.679469  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:36.679492  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:36.679526  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:36.679549  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:36.679569  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:36.679609  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 10:18:36.679636  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 10:18:36.679657  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 10:18:36.679693  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 10:18:36.679719  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.679743  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:36.679780  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Running
	I1123 10:18:36.679815  542668 retry.go:31] will retry after 575.218512ms: missing components: kube-dns
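	Each system_pods pass above lists the kube-system pods, maps them against the required components (kube-dns is considered satisfied once a coredns pod is Running), and schedules another attempt after a short, slightly randomized delay. Below is a minimal sketch of that retry loop, assuming client-go; the component-to-pod mapping, attempt limit, and backoff values are illustrative rather than minikube's retry.go.

	// Illustrative sketch only (assumes client-go; not minikube's retry.go):
	// list kube-system pods, check required components, retry with jitter.
	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"strings"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// missingComponents returns required components that have no Running pod.
	func missingComponents(pods []corev1.Pod, required map[string]string) []string {
		var missing []string
		for component, prefix := range required {
			found := false
			for _, p := range pods {
				if strings.HasPrefix(p.Name, prefix) && p.Status.Phase == corev1.PodRunning {
					found = true
					break
				}
			}
			if !found {
				missing = append(missing, component)
			}
		}
		return missing
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// "kube-dns" maps to a Running coredns pod, matching the log above.
		required := map[string]string{"kube-dns": "coredns-"}

		for attempt := 0; attempt < 20; attempt++ {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				missing := missingComponents(pods.Items, required)
				if len(missing) == 0 {
					fmt.Println("all required kube-system components are running")
					return
				}
				// Jittered backoff, loosely mirroring the fractional delays in the log.
				delay := 250*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
				fmt.Printf("will retry after %s: missing components: %s\n", delay, strings.Join(missing, ", "))
				time.Sleep(delay)
			}
		}
	}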
	I1123 10:18:36.841347  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:37.054279  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:37.059036  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:37.175489  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:37.262167  542668 system_pods.go:86] 19 kube-system pods found
	I1123 10:18:37.262207  542668 system_pods.go:89] "coredns-66bc5c9577-zgvcr" [5b8ea744-7a0a-48e3-a890-c2656634855e] Running
	I1123 10:18:37.262219  542668 system_pods.go:89] "csi-hostpath-attacher-0" [46a71f85-06cf-4560-8832-a8796547d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 10:18:37.262226  542668 system_pods.go:89] "csi-hostpath-resizer-0" [4ea2e8dd-ac4c-4562-a1c5-012978172d94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 10:18:37.262236  542668 system_pods.go:89] "csi-hostpathplugin-sftm7" [93fca40b-9e18-428b-9075-e8579a9af896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 10:18:37.262296  542668 system_pods.go:89] "etcd-addons-832672" [ddd9f286-b9b1-4b31-8809-ecc2110251b5] Running
	I1123 10:18:37.262303  542668 system_pods.go:89] "kindnet-vqgnm" [52e87538-daf5-4128-b431-3a2304afb791] Running
	I1123 10:18:37.262308  542668 system_pods.go:89] "kube-apiserver-addons-832672" [f51514aa-afb0-4e51-9e35-171ea5ed295e] Running
	I1123 10:18:37.262316  542668 system_pods.go:89] "kube-controller-manager-addons-832672" [12f8df13-5066-493c-ab33-9808ed5215c1] Running
	I1123 10:18:37.262322  542668 system_pods.go:89] "kube-ingress-dns-minikube" [0da8c2c5-a754-4f2f-9d9f-2c84d0cd2552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 10:18:37.262326  542668 system_pods.go:89] "kube-proxy-snjbw" [e978b0db-5148-461a-ba0e-9898cfac1cad] Running
	I1123 10:18:37.262332  542668 system_pods.go:89] "kube-scheduler-addons-832672" [8dc688c2-022a-4edd-95a4-4beb8d7b89c0] Running
	I1123 10:18:37.262350  542668 system_pods.go:89] "metrics-server-85b7d694d7-lv5tb" [31e91e20-f318-49e6-8673-15ccdc558d4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 10:18:37.262360  542668 system_pods.go:89] "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 10:18:37.262366  542668 system_pods.go:89] "registry-6b586f9694-n64pf" [df9cfe48-9ace-4b6a-be94-daa1ef351110] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 10:18:37.262376  542668 system_pods.go:89] "registry-creds-764b6fb674-6hk8b" [07606125-1919-4ff2-87bc-9e190e894654] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 10:18:37.262384  542668 system_pods.go:89] "registry-proxy-g5zv2" [f7b955e2-8566-432d-a780-323106f2098e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 10:18:37.262394  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qfdfv" [7baadf00-3f9d-4474-a278-bf96de08f70e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:37.262400  542668 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsqmt" [54da5dfb-cd81-48d6-993a-d5dc773eb3d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 10:18:37.262407  542668 system_pods.go:89] "storage-provisioner" [81ca59eb-e957-4389-9ebc-2c9e901b0676] Running
	I1123 10:18:37.262415  542668 system_pods.go:126] duration metric: took 1.775192418s to wait for k8s-apps to be running ...
	I1123 10:18:37.262432  542668 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:18:37.262488  542668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:37.279223  542668 system_svc.go:56] duration metric: took 16.781757ms WaitForService to wait for kubelet
	I1123 10:18:37.279295  542668 kubeadm.go:587] duration metric: took 46.116626644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:18:37.279339  542668 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:18:37.283155  542668 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:18:37.283213  542668 node_conditions.go:123] node cpu capacity is 2
	I1123 10:18:37.283228  542668 node_conditions.go:105] duration metric: took 3.855869ms to run NodePressure ...
	I1123 10:18:37.283249  542668 start.go:242] waiting for startup goroutines ...
	I1123 10:18:37.360729  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:37.555062  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:37.559498  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:37.673624  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:37.841179  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:38.056049  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:38.059805  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:38.173031  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:38.340735  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:38.558054  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:38.561988  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:38.672959  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:38.840599  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:39.054756  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:39.058470  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:39.173240  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:39.340740  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:39.554864  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:39.558494  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:39.673171  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:39.842196  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:40.059335  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:40.059452  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:40.176230  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:40.340520  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:40.555774  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:40.559848  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:40.672935  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:40.867389  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:41.054241  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:41.058965  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:41.172844  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:41.339807  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:41.554461  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:41.558336  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:41.674256  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:41.852095  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:42.058274  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:42.059855  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:42.175424  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:42.351616  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:42.562336  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:42.567830  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:42.672663  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:42.851455  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:43.062123  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:43.063966  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:43.173871  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:43.341525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:43.555122  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:43.559138  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:43.673286  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:43.845084  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:44.064247  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:44.065425  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:44.173297  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:44.340807  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:44.566393  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:44.566592  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:44.678804  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:44.844252  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:45.057038  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:45.061455  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:45.178292  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:45.350456  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:45.555087  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:45.560084  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:45.684883  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:45.841350  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:46.055356  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:46.060810  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:46.185711  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:46.340993  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:46.553684  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:46.558059  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:46.673057  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:46.840120  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:47.054504  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:47.058857  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:47.173087  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:47.340668  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:47.554393  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:47.558019  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:47.673174  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:47.840407  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:48.055124  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:48.059751  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:48.173098  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:48.341316  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:48.554600  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:48.559476  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:48.673453  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:48.841428  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:49.054498  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:49.059099  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:49.173087  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:49.340240  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:49.554473  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:49.557869  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:49.673165  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:49.840148  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:50.054261  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:50.059292  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:50.173236  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:50.340377  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:50.554315  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:50.557779  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:50.673231  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:50.840701  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:51.054997  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:51.059180  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:51.173470  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:51.353525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:51.555326  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:51.557694  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:51.672722  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:51.842956  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:52.054983  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:52.059729  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:52.173633  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:52.341953  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:52.555581  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:52.559996  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:52.678284  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:52.841131  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:53.054346  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:53.058134  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:53.173033  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:53.341672  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:53.554965  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:53.560585  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:53.674097  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:53.840385  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:54.055516  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:54.058635  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:54.173083  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:54.340765  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:54.553961  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:54.558598  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:54.673525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:54.841651  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:55.055579  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:55.059562  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:55.173665  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:55.351604  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:55.555790  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:55.558168  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:55.674496  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:55.841951  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:56.054441  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:56.058710  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:56.172998  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:56.340353  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:56.554602  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:56.559295  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:56.674514  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:56.840851  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:57.054554  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:57.058693  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:57.173359  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:57.341941  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:57.553727  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:57.559350  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:57.673353  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:57.841393  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:58.055218  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:58.059095  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:58.172759  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:58.341276  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:58.556924  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:58.561779  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:58.673211  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:58.840581  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:59.055191  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:59.059184  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:59.173486  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:59.340760  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:18:59.554139  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:18:59.559992  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:18:59.673464  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:18:59.844731  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:00.057475  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:00.068243  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:00.181323  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:00.341737  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:00.555343  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:00.558731  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:00.673276  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:00.840695  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:01.055342  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:01.059523  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:01.174098  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:01.341388  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:01.555714  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:01.558708  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:01.673578  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:01.841569  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:02.055188  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:02.059435  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:02.173722  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:02.346558  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:02.554733  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:02.558910  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:02.673491  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:02.840576  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:03.054404  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:03.059878  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:03.173330  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:03.340903  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:03.554563  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:03.559338  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:03.673548  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:03.844174  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:04.054967  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:04.063477  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:04.173781  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:04.341109  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:04.554753  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:04.567486  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:04.672787  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:04.840366  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:05.054631  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:05.058462  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:05.173138  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:05.346902  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:05.554707  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:05.558368  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:05.673558  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:05.841282  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:06.054480  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:06.060646  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:06.172602  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:06.340560  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:06.554494  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:06.558018  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:06.678691  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:06.839777  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:07.053870  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:07.058912  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:07.173131  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:07.340278  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:07.557208  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:07.559547  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:07.672564  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:07.840696  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:08.054385  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:08.058981  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:08.173112  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:08.342131  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:08.555702  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:08.559329  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:08.675718  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:08.841500  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:09.055622  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:09.060049  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:09.173831  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:09.340497  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:09.553734  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:09.558289  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:09.674232  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:09.841018  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:10.054427  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:10.058861  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:10.174050  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:10.342614  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:10.554685  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:10.558268  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:10.673710  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:10.840750  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:11.061953  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:11.067283  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:11.173852  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:11.342152  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:11.554401  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:11.557683  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:11.672448  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:11.840505  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:12.054811  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:12.059871  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:12.173086  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:12.340913  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:12.554364  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:12.559272  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:12.673874  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:12.841226  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:13.055100  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:13.059280  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:13.173241  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:13.340475  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:13.554508  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:13.557913  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:13.672808  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:13.844036  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:14.055545  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:14.058903  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:14.173148  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:14.340838  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:14.554795  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:14.558968  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:14.673362  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:14.841225  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:15.055469  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:15.059358  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:15.174041  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:15.340535  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:15.554878  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:15.558762  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:15.681617  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:15.840987  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:16.055971  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:16.059599  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 10:19:16.174267  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:16.340979  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:16.555068  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:16.559147  542668 kapi.go:107] duration metric: took 1m19.0038185s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 10:19:16.672990  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:16.840832  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:17.055351  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:17.173898  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:17.340644  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:17.555909  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:17.672702  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:17.841052  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:18.054588  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:18.172463  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:18.341305  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:18.554948  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:18.673153  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:18.841082  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:19.054294  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:19.173493  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:19.355144  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:19.555467  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:19.675097  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:19.842500  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:20.055566  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:20.173963  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:20.343073  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:20.559095  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:20.673869  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:20.840838  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:21.054065  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:21.173589  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:21.341886  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:21.554184  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:21.672831  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:21.840334  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:22.055383  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:22.173564  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:22.350265  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:22.555107  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:22.673453  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:22.841879  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:23.054348  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:23.174826  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:23.342201  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:23.555207  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:23.673204  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 10:19:23.840874  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:24.053857  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:24.172898  542668 kapi.go:107] duration metric: took 1m23.00328581s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 10:19:24.176603  542668 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-832672 cluster.
	I1123 10:19:24.179906  542668 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 10:19:24.183279  542668 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 10:19:24.340683  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:24.554502  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:24.841241  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:25.055435  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:25.341197  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:25.554657  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:25.840824  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:26.055142  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:26.340761  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:26.555058  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:26.841752  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:27.054565  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:27.340161  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:27.554117  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:27.840339  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:28.054963  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:28.340525  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:28.555391  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:28.846399  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:29.055120  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:29.341033  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:29.554987  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:29.840445  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:30.078506  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:30.354912  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:30.555238  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:30.848856  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:31.054437  542668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 10:19:31.340967  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:31.554561  542668 kapi.go:107] duration metric: took 1m34.003732689s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 10:19:31.840812  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:32.340603  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:32.841212  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:33.341048  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:33.840900  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:34.340888  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:34.841887  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:35.341292  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:35.840198  542668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 10:19:36.342608  542668 kapi.go:107] duration metric: took 1m38.505861652s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 10:19:36.345734  542668 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, inspektor-gadget, registry-creds, storage-provisioner, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1123 10:19:36.348663  542668 addons.go:530] duration metric: took 1m45.18555693s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner-rancher ingress-dns inspektor-gadget registry-creds storage-provisioner cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1123 10:19:36.348724  542668 start.go:247] waiting for cluster config update ...
	I1123 10:19:36.348747  542668 start.go:256] writing updated cluster config ...
	I1123 10:19:36.349059  542668 ssh_runner.go:195] Run: rm -f paused
	I1123 10:19:36.353682  542668 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:19:36.357223  542668 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zgvcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.363464  542668 pod_ready.go:94] pod "coredns-66bc5c9577-zgvcr" is "Ready"
	I1123 10:19:36.363494  542668 pod_ready.go:86] duration metric: took 6.245229ms for pod "coredns-66bc5c9577-zgvcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.365843  542668 pod_ready.go:83] waiting for pod "etcd-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.370738  542668 pod_ready.go:94] pod "etcd-addons-832672" is "Ready"
	I1123 10:19:36.370769  542668 pod_ready.go:86] duration metric: took 4.896721ms for pod "etcd-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.373388  542668 pod_ready.go:83] waiting for pod "kube-apiserver-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.378410  542668 pod_ready.go:94] pod "kube-apiserver-addons-832672" is "Ready"
	I1123 10:19:36.378441  542668 pod_ready.go:86] duration metric: took 4.989342ms for pod "kube-apiserver-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.380911  542668 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.758545  542668 pod_ready.go:94] pod "kube-controller-manager-addons-832672" is "Ready"
	I1123 10:19:36.758577  542668 pod_ready.go:86] duration metric: took 377.638378ms for pod "kube-controller-manager-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:36.959008  542668 pod_ready.go:83] waiting for pod "kube-proxy-snjbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.357329  542668 pod_ready.go:94] pod "kube-proxy-snjbw" is "Ready"
	I1123 10:19:37.357357  542668 pod_ready.go:86] duration metric: took 398.321212ms for pod "kube-proxy-snjbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.557685  542668 pod_ready.go:83] waiting for pod "kube-scheduler-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.957611  542668 pod_ready.go:94] pod "kube-scheduler-addons-832672" is "Ready"
	I1123 10:19:37.957639  542668 pod_ready.go:86] duration metric: took 399.927816ms for pod "kube-scheduler-addons-832672" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:37.957653  542668 pod_ready.go:40] duration metric: took 1.603935065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:19:38.018628  542668 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:19:38.021998  542668 out.go:179] * Done! kubectl is now configured to use "addons-832672" cluster and "default" namespace by default
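	
	(Editor's note, not part of the test output: the gcp-auth messages above describe opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. The following is a minimal sketch of such a pod manifest; the pod name is hypothetical and the label value "true" is an assumption based on the addon's documented usage, since the webhook keys off the label itself.)
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                  # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"      # assumed value; presence of this label skips credential injection
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]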
	
	
	==> CRI-O <==
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.101399591Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f751c3794d6c3b2279a493b939ab7992eaeddd5cb3bd4b48f74ecede1942a862 UID:f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c NetNS:/var/run/netns/aa58aff4-c472-42cc-8a38-358d4c101510 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079230}] Aliases:map[]}"
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.101987752Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.105129362Z" level=info msg="Ran pod sandbox f751c3794d6c3b2279a493b939ab7992eaeddd5cb3bd4b48f74ecede1942a862 with infra container: default/busybox/POD" id=816df50c-1cfb-4bfe-ad64-5b5098410a48 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.108119141Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3c3be42-d51b-4323-8638-52f72321f2fc name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.108355822Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a3c3be42-d51b-4323-8638-52f72321f2fc name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.108461497Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a3c3be42-d51b-4323-8638-52f72321f2fc name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.110938556Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1996940a-a71d-434f-977c-6939a23c4e82 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:19:39 addons-832672 crio[829]: time="2025-11-23T10:19:39.115272109Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.161147678Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1996940a-a71d-434f-977c-6939a23c4e82 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.162252581Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f2e4e15d-72eb-4808-bd9e-844b5b807e40 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.164774333Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=68ff1737-1ab6-461b-beb7-bf60cc72d7bf name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.170421055Z" level=info msg="Creating container: default/busybox/busybox" id=0d8326ae-479a-46a0-974e-fa76c394284a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.170675434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.177347927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.178031498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.197136857Z" level=info msg="Created container 801617573b1b016a1919ce4a2dde6838f28f7b90853655ac5482ea001caef543: default/busybox/busybox" id=0d8326ae-479a-46a0-974e-fa76c394284a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.198907091Z" level=info msg="Starting container: 801617573b1b016a1919ce4a2dde6838f28f7b90853655ac5482ea001caef543" id=ef16310a-3101-4339-9f74-0e3e15f89396 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:41 addons-832672 crio[829]: time="2025-11-23T10:19:41.200644757Z" level=info msg="Started container" PID=4987 containerID=801617573b1b016a1919ce4a2dde6838f28f7b90853655ac5482ea001caef543 description=default/busybox/busybox id=ef16310a-3101-4339-9f74-0e3e15f89396 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f751c3794d6c3b2279a493b939ab7992eaeddd5cb3bd4b48f74ecede1942a862
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.53104944Z" level=info msg="Removing container: dff4dda31a5fe1f201dc5093945d15162ced76f8778ca6f0c85405ef2d2d3ec2" id=5013f0d5-927f-46ae-96dc-ccc82828a93a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.533547611Z" level=info msg="Error loading conmon cgroup of container dff4dda31a5fe1f201dc5093945d15162ced76f8778ca6f0c85405ef2d2d3ec2: cgroup deleted" id=5013f0d5-927f-46ae-96dc-ccc82828a93a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.541837564Z" level=info msg="Removed container dff4dda31a5fe1f201dc5093945d15162ced76f8778ca6f0c85405ef2d2d3ec2: gcp-auth/gcp-auth-certs-create-rspn9/create" id=5013f0d5-927f-46ae-96dc-ccc82828a93a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.544576443Z" level=info msg="Stopping pod sandbox: 52c19d427bd3be00e077d2322c2fac936440b682ac65e7b002970b681ffd2c85" id=38e4b740-b6e1-4a50-a141-2a9a6e35c24c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.544631976Z" level=info msg="Stopped pod sandbox (already stopped): 52c19d427bd3be00e077d2322c2fac936440b682ac65e7b002970b681ffd2c85" id=38e4b740-b6e1-4a50-a141-2a9a6e35c24c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.545265496Z" level=info msg="Removing pod sandbox: 52c19d427bd3be00e077d2322c2fac936440b682ac65e7b002970b681ffd2c85" id=1b703938-efd7-4985-8f2c-c1effbb91d50 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:19:46 addons-832672 crio[829]: time="2025-11-23T10:19:46.551230195Z" level=info msg="Removed pod sandbox: 52c19d427bd3be00e077d2322c2fac936440b682ac65e7b002970b681ffd2c85" id=1b703938-efd7-4985-8f2c-c1effbb91d50 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	801617573b1b0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   f751c3794d6c3       busybox                                    default
	0d6735cfc81cc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	876f80945af82       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          15 seconds ago       Running             csi-provisioner                          0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	413e66dc710ea       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	b6bfc4971a4ce       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           18 seconds ago       Running             hostpath                                 0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	17dbb9a4b0b58       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             19 seconds ago       Running             controller                               0                   05c11920b8f34       ingress-nginx-controller-6c8bf45fb-qfs8k   ingress-nginx
	eb231ca13f49f       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             19 seconds ago       Exited              patch                                    3                   aec688aa4054b       ingress-nginx-admission-patch-sjmvd        ingress-nginx
	aeb5fae68fa2c       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             25 seconds ago       Exited              patch                                    3                   23e39ed73567f       gcp-auth-certs-patch-kj727                 gcp-auth
	d65cd5cc34cdd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 26 seconds ago       Running             gcp-auth                                 0                   e688684a86d82       gcp-auth-78565c9fb4-mhfx4                  gcp-auth
	cd4980ae684bc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                29 seconds ago       Running             node-driver-registrar                    0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	0cab600cd36b7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            30 seconds ago       Running             gadget                                   0                   2c1383f948d9c       gadget-bh47b                               gadget
	6b8563d255a65       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           34 seconds ago       Running             registry                                 0                   edf1281727ec5       registry-6b586f9694-n64pf                  kube-system
	59a26ed66a88a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              36 seconds ago       Running             registry-proxy                           0                   8d333002cc260       registry-proxy-g5zv2                       kube-system
	fac52e5468f02       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      39 seconds ago       Running             volume-snapshot-controller               0                   767b7b2d2d658       snapshot-controller-7d9fbc56b8-qfdfv       kube-system
	749892c269a97       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              39 seconds ago       Running             yakd                                     0                   cfcbb8a67f3e4       yakd-dashboard-5ff678cb9-ljzns             yakd-dashboard
	bee261c58130a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   44 seconds ago       Running             csi-external-health-monitor-controller   0                   0381361b3bd5f       csi-hostpathplugin-sftm7                   kube-system
	d51d99bf7ad51       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   45 seconds ago       Exited              create                                   0                   8aaad2a8ef496       ingress-nginx-admission-create-rgg69       ingress-nginx
	13f3666d715eb       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     46 seconds ago       Running             nvidia-device-plugin-ctr                 0                   05cb677324305       nvidia-device-plugin-daemonset-jwlsr       kube-system
	240455e48d203       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             50 seconds ago       Running             csi-attacher                             0                   c1a74448032c6       csi-hostpath-attacher-0                    kube-system
	2d505f439d6fa       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              51 seconds ago       Running             csi-resizer                              0                   98629eb2d3035       csi-hostpath-resizer-0                     kube-system
	b161e83d129d4       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               52 seconds ago       Running             cloud-spanner-emulator                   0                   023756d13ce97       cloud-spanner-emulator-5bdddb765-5djk5     default
	c0e97eff7ee81       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               58 seconds ago       Running             minikube-ingress-dns                     0                   40bef269e183a       kube-ingress-dns-minikube                  kube-system
	6f755863005bd       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   34a775e28a934       local-path-provisioner-648f6765c9-cv5hq    local-path-storage
	9892343ca47ba       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   316ecb140276b       metrics-server-85b7d694d7-lv5tb            kube-system
	6a1f9c0d3e16f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   3f813c2af9eb1       snapshot-controller-7d9fbc56b8-qsqmt       kube-system
	3419ff6dcec28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   2dd4e3c9b1061       coredns-66bc5c9577-zgvcr                   kube-system
	c8a56a4ee027a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   1f4e043abcfba       storage-provisioner                        kube-system
	3ff8fcd0337f5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   42f4390f1b22c       kindnet-vqgnm                              kube-system
	1c6ce78b41089       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             About a minute ago   Running             kube-proxy                               0                   80abaedcdb8db       kube-proxy-snjbw                           kube-system
	fe381bc317e85       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   a68eae19352f1       etcd-addons-832672                         kube-system
	ed2ede976a893       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   d4a406d9d0cb9       kube-scheduler-addons-832672               kube-system
	e5d0f156a4b2a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   32b5944ce2bf0       kube-apiserver-addons-832672               kube-system
	3cc6c3e6832ed       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   f0c06f33634fb       kube-controller-manager-addons-832672      kube-system
	
	
	==> coredns [3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6] <==
	[INFO] 10.244.0.17:37905 - 54882 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106742s
	[INFO] 10.244.0.17:37905 - 56004 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001983406s
	[INFO] 10.244.0.17:37905 - 48175 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001990544s
	[INFO] 10.244.0.17:37905 - 63604 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116498s
	[INFO] 10.244.0.17:37905 - 11398 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000298507s
	[INFO] 10.244.0.17:52158 - 11919 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000269444s
	[INFO] 10.244.0.17:52158 - 11681 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000166419s
	[INFO] 10.244.0.17:57749 - 31268 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014461s
	[INFO] 10.244.0.17:57749 - 31006 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008604s
	[INFO] 10.244.0.17:53941 - 23977 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126361s
	[INFO] 10.244.0.17:53941 - 23763 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081207s
	[INFO] 10.244.0.17:59090 - 48086 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00182049s
	[INFO] 10.244.0.17:59090 - 48275 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001920873s
	[INFO] 10.244.0.17:55030 - 31097 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154701s
	[INFO] 10.244.0.17:55030 - 31261 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000309863s
	[INFO] 10.244.0.20:46507 - 60716 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162242s
	[INFO] 10.244.0.20:40698 - 45789 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164425s
	[INFO] 10.244.0.20:44824 - 49733 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141566s
	[INFO] 10.244.0.20:56809 - 50573 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106496s
	[INFO] 10.244.0.20:60416 - 12455 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136052s
	[INFO] 10.244.0.20:59129 - 50330 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013789s
	[INFO] 10.244.0.20:44799 - 10327 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003030839s
	[INFO] 10.244.0.20:47195 - 11896 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002253178s
	[INFO] 10.244.0.20:47121 - 29779 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00195239s
	[INFO] 10.244.0.20:48042 - 52449 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001928521s
	
	
	==> describe nodes <==
	Name:               addons-832672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-832672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=addons-832672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_17_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-832672
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-832672"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:17:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-832672
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:19:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:19:49 +0000   Sun, 23 Nov 2025 10:17:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:19:49 +0000   Sun, 23 Nov 2025 10:17:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:19:49 +0000   Sun, 23 Nov 2025 10:17:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:19:49 +0000   Sun, 23 Nov 2025 10:18:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-832672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                bc3244ed-cf09-446c-8c77-ecf98153f57e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-5djk5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gadget                      gadget-bh47b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  gcp-auth                    gcp-auth-78565c9fb4-mhfx4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-qfs8k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         113s
	  kube-system                 coredns-66bc5c9577-zgvcr                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 csi-hostpathplugin-sftm7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-832672                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-vqgnm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-addons-832672                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-addons-832672       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-snjbw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-addons-832672                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 metrics-server-85b7d694d7-lv5tb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         114s
	  kube-system                 nvidia-device-plugin-daemonset-jwlsr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-n64pf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 registry-creds-764b6fb674-6hk8b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-proxy-g5zv2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-qfdfv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 snapshot-controller-7d9fbc56b8-qsqmt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  local-path-storage          local-path-provisioner-648f6765c9-cv5hq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ljzns              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 115s                   kube-proxy       
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x7 over 2m11s)  kubelet          Node addons-832672 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x7 over 2m11s)  kubelet          Node addons-832672 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x6 over 2m11s)  kubelet          Node addons-832672 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m4s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m4s                   kubelet          Node addons-832672 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m4s                   kubelet          Node addons-832672 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s                   kubelet          Node addons-832672 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m                     node-controller  Node addons-832672 event: Registered Node addons-832672 in Controller
	  Normal   NodeReady                75s                    kubelet          Node addons-832672 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	[Nov23 10:16] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8] <==
	{"level":"warn","ts":"2025-11-23T10:17:42.466074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.497862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.502818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.523627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.546473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.565767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.576651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.599457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.614153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.627931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.641600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.664480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.679424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.699541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.709791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.749935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.780235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.790582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:42.896848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:58.099181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:58.109744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.887052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.901090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.946385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:20.961863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d65cd5cc34cdd15c104c64e7e59cd6d9bdaea860b8594852cc5c94975a37f7eb] <==
	2025/11/23 10:19:23 GCP Auth Webhook started!
	2025/11/23 10:19:38 Ready to marshal response ...
	2025/11/23 10:19:38 Ready to write response ...
	2025/11/23 10:19:38 Ready to marshal response ...
	2025/11/23 10:19:38 Ready to write response ...
	2025/11/23 10:19:38 Ready to marshal response ...
	2025/11/23 10:19:38 Ready to write response ...
	
	
	==> kernel <==
	 10:19:50 up  3:02,  0 user,  load average: 3.03, 3.37, 3.40
	Linux addons-832672 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a] <==
	I1123 10:17:54.667025       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:54.667338       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:18:24.667054       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:18:24.667069       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:18:24.667283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:18:24.668372       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 10:18:26.167240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:18:26.167263       1 metrics.go:72] Registering metrics
	I1123 10:18:26.167319       1 controller.go:711] "Syncing nftables rules"
	I1123 10:18:34.673488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:18:34.673526       1 main.go:301] handling current node
	I1123 10:18:44.667736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:18:44.667766       1 main.go:301] handling current node
	I1123 10:18:54.666707       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:18:54.666735       1 main.go:301] handling current node
	I1123 10:19:04.666528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:19:04.666565       1 main.go:301] handling current node
	I1123 10:19:14.666769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:19:14.666848       1 main.go:301] handling current node
	I1123 10:19:24.667063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:19:24.667093       1 main.go:301] handling current node
	I1123 10:19:34.666716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:19:34.666756       1 main.go:301] handling current node
	I1123 10:19:44.666242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:19:44.666276       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3] <==
	W1123 10:17:58.091515       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1123 10:17:58.106737       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1123 10:18:01.030926       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.108.176"}
	W1123 10:18:20.887052       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:20.900986       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:20.946237       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:20.960570       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 10:18:35.365923       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.108.176:443: connect: connection refused
	E1123 10:18:35.366046       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.108.176:443: connect: connection refused" logger="UnhandledError"
	W1123 10:18:35.366901       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.108.176:443: connect: connection refused
	E1123 10:18:35.366983       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.108.176:443: connect: connection refused" logger="UnhandledError"
	W1123 10:18:35.449251       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.108.176:443: connect: connection refused
	E1123 10:18:35.449308       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.108.176:443: connect: connection refused" logger="UnhandledError"
	E1123 10:18:53.903518       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.17.130:443: connect: connection refused" logger="UnhandledError"
	W1123 10:18:53.903930       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 10:18:53.904222       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 10:18:53.904665       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.17.130:443: connect: connection refused" logger="UnhandledError"
	E1123 10:18:53.910249       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.17.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.17.130:443: connect: connection refused" logger="UnhandledError"
	I1123 10:18:54.047965       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 10:19:47.986944       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37188: use of closed network connection
	E1123 10:19:48.212658       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37230: use of closed network connection
	E1123 10:19:48.342011       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37240: use of closed network connection
	
	
	==> kube-controller-manager [3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20] <==
	I1123 10:17:50.916889       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:17:50.917966       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:17:50.917994       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:17:50.918029       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:17:50.918075       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:50.918119       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 10:17:50.918123       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:17:50.918143       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:17:50.918086       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:17:50.919484       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:17:50.919764       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:17:50.919813       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:17:50.919825       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:17:50.926777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:50.927790       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	E1123 10:18:20.880443       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 10:18:20.880595       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 10:18:20.880650       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 10:18:20.934741       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 10:18:20.939121       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 10:18:20.981649       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:18:21.040182       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:18:35.870936       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1123 10:18:50.993630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 10:18:51.048064       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b] <==
	I1123 10:17:54.402843       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:54.528348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:54.628971       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:54.629012       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 10:17:54.629086       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:54.780731       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:54.780784       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:54.788013       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:54.788308       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:54.788323       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:54.789636       1 config.go:200] "Starting service config controller"
	I1123 10:17:54.789645       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:54.789660       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:54.789664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:54.789675       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:54.789680       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:54.790291       1 config.go:309] "Starting node config controller"
	I1123 10:17:54.790297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:54.790303       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:54.892983       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:54.893024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:17:54.893060       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4] <==
	I1123 10:17:44.175845       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:17:44.175920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:17:44.181546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 10:17:44.184113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:17:44.184259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:17:44.184375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:17:44.184476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:17:44.184631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:17:44.187436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:17:44.187636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:17:44.187731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:17:44.187784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:17:44.187893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:17:44.187942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:17:44.187987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:17:44.188032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:17:44.188065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:17:44.188115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:17:44.188147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:17:44.188328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:17:44.188383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:17:45.029620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:17:45.047078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:17:45.373210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 10:17:47.175895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:19:26 addons-832672 kubelet[1277]: I1123 10:19:26.349073    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s25cd\" (UniqueName: \"kubernetes.io/projected/06c5f9b5-4344-4934-bef3-e2f2dc1aeb71-kube-api-access-s25cd\") pod \"06c5f9b5-4344-4934-bef3-e2f2dc1aeb71\" (UID: \"06c5f9b5-4344-4934-bef3-e2f2dc1aeb71\") "
	Nov 23 10:19:26 addons-832672 kubelet[1277]: I1123 10:19:26.357228    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c5f9b5-4344-4934-bef3-e2f2dc1aeb71-kube-api-access-s25cd" (OuterVolumeSpecName: "kube-api-access-s25cd") pod "06c5f9b5-4344-4934-bef3-e2f2dc1aeb71" (UID: "06c5f9b5-4344-4934-bef3-e2f2dc1aeb71"). InnerVolumeSpecName "kube-api-access-s25cd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 10:19:26 addons-832672 kubelet[1277]: I1123 10:19:26.450420    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s25cd\" (UniqueName: \"kubernetes.io/projected/06c5f9b5-4344-4934-bef3-e2f2dc1aeb71-kube-api-access-s25cd\") on node \"addons-832672\" DevicePath \"\""
	Nov 23 10:19:27 addons-832672 kubelet[1277]: I1123 10:19:27.139724    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23e39ed73567fbee3f2263ce0a9866f7d20ba36012e60cb3c3f1d8c45f7390c7"
	Nov 23 10:19:29 addons-832672 kubelet[1277]: I1123 10:19:29.509290    1277 scope.go:117] "RemoveContainer" containerID="5c2440566204e0b4de80e5c5f7e7394bb77bd8570865d646b61b15a3a01287db"
	Nov 23 10:19:31 addons-832672 kubelet[1277]: I1123 10:19:31.155564    1277 scope.go:117] "RemoveContainer" containerID="5c2440566204e0b4de80e5c5f7e7394bb77bd8570865d646b61b15a3a01287db"
	Nov 23 10:19:32 addons-832672 kubelet[1277]: I1123 10:19:32.207613    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-qfs8k" podStartSLOduration=74.682863397 podStartE2EDuration="1m35.207594636s" podCreationTimestamp="2025-11-23 10:17:57 +0000 UTC" firstStartedPulling="2025-11-23 10:19:09.715016443 +0000 UTC m=+83.355217310" lastFinishedPulling="2025-11-23 10:19:30.239747681 +0000 UTC m=+103.879948549" observedRunningTime="2025-11-23 10:19:31.217332715 +0000 UTC m=+104.857533591" watchObservedRunningTime="2025-11-23 10:19:32.207594636 +0000 UTC m=+105.847795504"
	Nov 23 10:19:32 addons-832672 kubelet[1277]: I1123 10:19:32.313281    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnpqk\" (UniqueName: \"kubernetes.io/projected/c70b468a-770e-4b3f-aafa-a3e4a89baab8-kube-api-access-nnpqk\") pod \"c70b468a-770e-4b3f-aafa-a3e4a89baab8\" (UID: \"c70b468a-770e-4b3f-aafa-a3e4a89baab8\") "
	Nov 23 10:19:32 addons-832672 kubelet[1277]: I1123 10:19:32.316344    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70b468a-770e-4b3f-aafa-a3e4a89baab8-kube-api-access-nnpqk" (OuterVolumeSpecName: "kube-api-access-nnpqk") pod "c70b468a-770e-4b3f-aafa-a3e4a89baab8" (UID: "c70b468a-770e-4b3f-aafa-a3e4a89baab8"). InnerVolumeSpecName "kube-api-access-nnpqk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 10:19:32 addons-832672 kubelet[1277]: I1123 10:19:32.413801    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nnpqk\" (UniqueName: \"kubernetes.io/projected/c70b468a-770e-4b3f-aafa-a3e4a89baab8-kube-api-access-nnpqk\") on node \"addons-832672\" DevicePath \"\""
	Nov 23 10:19:32 addons-832672 kubelet[1277]: I1123 10:19:32.671532    1277 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 23 10:19:32 addons-832672 kubelet[1277]: I1123 10:19:32.671586    1277 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 23 10:19:33 addons-832672 kubelet[1277]: I1123 10:19:33.186657    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec688aa4054bfd71fd6b97d803cde9e8f83efee521733500daa47b512f06271"
	Nov 23 10:19:36 addons-832672 kubelet[1277]: I1123 10:19:36.226717    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-sftm7" podStartSLOduration=1.9223172339999999 podStartE2EDuration="1m1.226698529s" podCreationTimestamp="2025-11-23 10:18:35 +0000 UTC" firstStartedPulling="2025-11-23 10:18:36.361048171 +0000 UTC m=+50.001249039" lastFinishedPulling="2025-11-23 10:19:35.665429458 +0000 UTC m=+109.305630334" observedRunningTime="2025-11-23 10:19:36.225793564 +0000 UTC m=+109.865994432" watchObservedRunningTime="2025-11-23 10:19:36.226698529 +0000 UTC m=+109.866899405"
	Nov 23 10:19:38 addons-832672 kubelet[1277]: I1123 10:19:38.512770    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b42514c7-29f5-4424-9ebb-a351688d7e25" path="/var/lib/kubelet/pods/b42514c7-29f5-4424-9ebb-a351688d7e25/volumes"
	Nov 23 10:19:38 addons-832672 kubelet[1277]: I1123 10:19:38.869649    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c-gcp-creds\") pod \"busybox\" (UID: \"f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c\") " pod="default/busybox"
	Nov 23 10:19:38 addons-832672 kubelet[1277]: I1123 10:19:38.869913    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f4fc\" (UniqueName: \"kubernetes.io/projected/f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c-kube-api-access-4f4fc\") pod \"busybox\" (UID: \"f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c\") " pod="default/busybox"
	Nov 23 10:19:39 addons-832672 kubelet[1277]: E1123 10:19:39.474673    1277 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 23 10:19:39 addons-832672 kubelet[1277]: E1123 10:19:39.474771    1277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07606125-1919-4ff2-87bc-9e190e894654-gcr-creds podName:07606125-1919-4ff2-87bc-9e190e894654 nodeName:}" failed. No retries permitted until 2025-11-23 10:20:43.47475275 +0000 UTC m=+177.114953618 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/07606125-1919-4ff2-87bc-9e190e894654-gcr-creds") pod "registry-creds-764b6fb674-6hk8b" (UID: "07606125-1919-4ff2-87bc-9e190e894654") : secret "registry-creds-gcr" not found
	Nov 23 10:19:41 addons-832672 kubelet[1277]: I1123 10:19:41.245866    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.190984824 podStartE2EDuration="3.245837486s" podCreationTimestamp="2025-11-23 10:19:38 +0000 UTC" firstStartedPulling="2025-11-23 10:19:39.10879692 +0000 UTC m=+112.748997788" lastFinishedPulling="2025-11-23 10:19:41.163649582 +0000 UTC m=+114.803850450" observedRunningTime="2025-11-23 10:19:41.244230709 +0000 UTC m=+114.884431586" watchObservedRunningTime="2025-11-23 10:19:41.245837486 +0000 UTC m=+114.886038354"
	Nov 23 10:19:46 addons-832672 kubelet[1277]: I1123 10:19:46.529263    1277 scope.go:117] "RemoveContainer" containerID="dff4dda31a5fe1f201dc5093945d15162ced76f8778ca6f0c85405ef2d2d3ec2"
	Nov 23 10:19:46 addons-832672 kubelet[1277]: E1123 10:19:46.617108    1277 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/64b3563a97c97420cb6e33883fc9b81506c2970f7d0b7f592b098f1220a8c813/diff" to get inode usage: stat /var/lib/containers/storage/overlay/64b3563a97c97420cb6e33883fc9b81506c2970f7d0b7f592b098f1220a8c813/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-kj727_06c5f9b5-4344-4934-bef3-e2f2dc1aeb71/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-kj727_06c5f9b5-4344-4934-bef3-e2f2dc1aeb71/patch/1.log: no such file or directory
	Nov 23 10:19:46 addons-832672 kubelet[1277]: E1123 10:19:46.649870    1277 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6b55c09bb0d77337bf77ee7fff7522a84d5ccebd28e21f60c98c99292b7b5920/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6b55c09bb0d77337bf77ee7fff7522a84d5ccebd28e21f60c98c99292b7b5920/diff: no such file or directory, extraDiskErr: <nil>
	Nov 23 10:19:46 addons-832672 kubelet[1277]: E1123 10:19:46.650120    1277 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/090f35201c04a9c22b5149ea6c17b06a84240ddd7622566d5cd43c2d5f62bc6a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/090f35201c04a9c22b5149ea6c17b06a84240ddd7622566d5cd43c2d5f62bc6a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-sjmvd_c70b468a-770e-4b3f-aafa-a3e4a89baab8/patch/1.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-sjmvd_c70b468a-770e-4b3f-aafa-a3e4a89baab8/patch/1.log: no such file or directory
	Nov 23 10:19:47 addons-832672 kubelet[1277]: E1123 10:19:47.987667    1277 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59510->127.0.0.1:35853: write tcp 127.0.0.1:59510->127.0.0.1:35853: write: broken pipe
	
	
	==> storage-provisioner [c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465] <==
	W1123 10:19:24.813112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:26.817366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:26.824375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:28.828200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:28.833730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:30.838599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:30.850833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:32.853785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:32.858851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:34.863151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:34.868579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:36.871885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:36.876750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:38.879323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:38.883857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:40.887064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:40.894272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:42.897332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:42.904306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:44.907029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:44.913622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:46.917554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:46.921723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:48.925111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:48.933040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-832672 -n addons-832672
helpers_test.go:269: (dbg) Run:  kubectl --context addons-832672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-kj727 ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-832672 describe pod gcp-auth-certs-patch-kj727 ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-832672 describe pod gcp-auth-certs-patch-kj727 ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b: exit status 1 (87.75363ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-kj727" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-rgg69" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sjmvd" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6hk8b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-832672 describe pod gcp-auth-certs-patch-kj727 ingress-nginx-admission-create-rgg69 ingress-nginx-admission-patch-sjmvd registry-creds-764b6fb674-6hk8b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable headlamp --alsologtostderr -v=1: exit status 11 (263.595484ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:19:51.592793  549263 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:19:51.593863  549263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:51.593876  549263 out.go:374] Setting ErrFile to fd 2...
	I1123 10:19:51.593881  549263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:51.594357  549263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:19:51.594737  549263 mustload.go:66] Loading cluster: addons-832672
	I1123 10:19:51.595128  549263 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:51.595146  549263 addons.go:622] checking whether the cluster is paused
	I1123 10:19:51.595266  549263 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:51.595282  549263 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:19:51.595821  549263 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:19:51.612867  549263 ssh_runner.go:195] Run: systemctl --version
	I1123 10:19:51.612942  549263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:19:51.634925  549263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:19:51.741219  549263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:19:51.741336  549263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:19:51.773285  549263 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:19:51.773318  549263 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:19:51.773324  549263 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:19:51.773328  549263 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:19:51.773332  549263 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:19:51.773336  549263 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:19:51.773340  549263 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:19:51.773342  549263 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:19:51.773346  549263 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:19:51.773352  549263 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:19:51.773355  549263 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:19:51.773359  549263 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:19:51.773364  549263 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:19:51.773374  549263 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:19:51.773434  549263 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:19:51.773462  549263 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:19:51.773473  549263 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:19:51.773481  549263 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:19:51.773484  549263 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:19:51.773488  549263 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:19:51.773492  549263 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:19:51.773495  549263 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:19:51.773498  549263 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:19:51.773514  549263 cri.go:89] found id: ""
	I1123 10:19:51.773574  549263 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:51.788810  549263 out.go:203] 
	W1123 10:19:51.791752  549263 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:19:51.791780  549263 out.go:285] * 
	* 
	W1123 10:19:51.799066  549263 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:19:51.802212  549263 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.19s)
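The MK_ADDON_DISABLE_PAUSED failure above (and the identical ones in the addon tests that follow) trips on minikube's paused-cluster check. A minimal sketch for replaying that check by hand, assuming the addons-832672 profile from this run is still up; the two node-side commands are copied from the stderr trace above, only the `minikube ssh` wrapper is an assumption:

	# list kube-system containers via crictl (succeeds in the trace above)
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# list runc containers (fails on this crio node: "open /run/runc: no such file or directory")
	out/minikube-linux-arm64 -p addons-832672 ssh -- sudo runc list -f json

The second command exiting with status 1 is what the addon disable path reports as "check paused: list paused" and surfaces as exit status 11.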

                                                
                                    
TestAddons/parallel/CloudSpanner (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-5djk5" [716c70ee-896c-47bb-8fc7-b7ca5f4accc4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003130512s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (279.092713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:21:22.015282  551288 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:21:22.016132  551288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:22.016286  551288 out.go:374] Setting ErrFile to fd 2...
	I1123 10:21:22.016319  551288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:22.016630  551288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:21:22.017000  551288 mustload.go:66] Loading cluster: addons-832672
	I1123 10:21:22.017564  551288 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:22.017613  551288 addons.go:622] checking whether the cluster is paused
	I1123 10:21:22.017757  551288 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:22.017796  551288 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:21:22.018510  551288 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:21:22.038157  551288 ssh_runner.go:195] Run: systemctl --version
	I1123 10:21:22.038216  551288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:21:22.055276  551288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:21:22.169494  551288 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:21:22.169581  551288 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:21:22.202734  551288 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:21:22.202795  551288 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:21:22.202825  551288 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:21:22.202843  551288 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:21:22.202867  551288 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:21:22.202889  551288 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:21:22.202910  551288 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:21:22.202931  551288 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:21:22.202960  551288 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:21:22.202985  551288 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:21:22.203005  551288 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:21:22.203026  551288 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:21:22.203054  551288 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:21:22.203072  551288 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:21:22.203091  551288 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:21:22.203124  551288 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:21:22.203151  551288 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:21:22.203172  551288 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:21:22.203205  551288 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:21:22.203232  551288 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:21:22.203256  551288 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:21:22.203276  551288 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:21:22.203294  551288 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:21:22.203314  551288 cri.go:89] found id: ""
	I1123 10:21:22.203383  551288 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:21:22.220310  551288 out.go:203] 
	W1123 10:21:22.223312  551288 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:21:22.223401  551288 out.go:285] * 
	* 
	W1123 10:21:22.230420  551288 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:21:22.233699  551288 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.30s)

                                                
                                    
TestAddons/parallel/LocalPath (8.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-832672 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-832672 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-832672 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [499b4e6a-625c-4baf-b0ab-ea5685466ce3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [499b4e6a-625c-4baf-b0ab-ea5685466ce3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [499b4e6a-625c-4baf-b0ab-ea5685466ce3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.007299812s
addons_test.go:967: (dbg) Run:  kubectl --context addons-832672 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 ssh "cat /opt/local-path-provisioner/pvc-158fdf5f-6f36-438b-8fb9-88aab27655a3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-832672 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-832672 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.130332ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:21:15.715350  551176 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:21:15.716038  551176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:15.716049  551176 out.go:374] Setting ErrFile to fd 2...
	I1123 10:21:15.716084  551176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:15.716409  551176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:21:15.716757  551176 mustload.go:66] Loading cluster: addons-832672
	I1123 10:21:15.717240  551176 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:15.717261  551176 addons.go:622] checking whether the cluster is paused
	I1123 10:21:15.717385  551176 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:15.717403  551176 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:21:15.718142  551176 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:21:15.736082  551176 ssh_runner.go:195] Run: systemctl --version
	I1123 10:21:15.736190  551176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:21:15.760516  551176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:21:15.868448  551176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:21:15.868584  551176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:21:15.898947  551176 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:21:15.898975  551176 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:21:15.898981  551176 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:21:15.898986  551176 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:21:15.898990  551176 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:21:15.898993  551176 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:21:15.898996  551176 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:21:15.899000  551176 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:21:15.899003  551176 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:21:15.899011  551176 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:21:15.899014  551176 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:21:15.899017  551176 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:21:15.899021  551176 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:21:15.899024  551176 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:21:15.899027  551176 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:21:15.899036  551176 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:21:15.899043  551176 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:21:15.899047  551176 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:21:15.899050  551176 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:21:15.899053  551176 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:21:15.899058  551176 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:21:15.899061  551176 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:21:15.899065  551176 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:21:15.899072  551176 cri.go:89] found id: ""
	I1123 10:21:15.899125  551176 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:21:15.919762  551176 out.go:203] 
	W1123 10:21:15.922858  551176 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:21:15.922886  551176 out.go:285] * 
	* 
	W1123 10:21:15.931089  551176 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:21:15.934199  551176 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.40s)
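Note on the MK_ADDON_DISABLE_PAUSED failures above: before disabling an addon, minikube probes whether the cluster is paused by listing runc containers on the node, and on this crio node `sudo runc list -f json` exits 1 because /run/runc does not exist. Below is a minimal sketch (not part of the test suite) that re-runs the same probe from the host via `minikube ssh`; the profile name and command are taken from the stderr above, and running the binary from out/ is an assumption about the local checkout.

package main

import (
	"fmt"
	"os/exec"
)

// Re-runs the paused-state probe from the failing disable path: list runc
// containers on the node. On this crio node it fails with
// "open /run/runc: no such file or directory", matching the report.
func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-832672",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("runc list failed as in the report: %v\n", err)
	}
}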

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jwlsr" [48294ed1-4eb3-4682-89d2-2d349dda0df1] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003628316s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (260.57854ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:21:01.043359  550811 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:21:01.044233  550811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:01.044290  550811 out.go:374] Setting ErrFile to fd 2...
	I1123 10:21:01.044312  550811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:01.044625  550811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:21:01.044978  550811 mustload.go:66] Loading cluster: addons-832672
	I1123 10:21:01.045452  550811 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:01.045493  550811 addons.go:622] checking whether the cluster is paused
	I1123 10:21:01.045630  550811 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:01.045665  550811 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:21:01.046210  550811 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:21:01.065815  550811 ssh_runner.go:195] Run: systemctl --version
	I1123 10:21:01.065877  550811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:21:01.084212  550811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:21:01.188623  550811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:21:01.188708  550811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:21:01.227461  550811 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:21:01.227485  550811 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:21:01.227491  550811 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:21:01.227496  550811 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:21:01.227500  550811 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:21:01.227504  550811 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:21:01.227508  550811 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:21:01.227511  550811 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:21:01.227514  550811 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:21:01.227521  550811 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:21:01.227524  550811 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:21:01.227528  550811 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:21:01.227531  550811 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:21:01.227535  550811 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:21:01.227539  550811 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:21:01.227547  550811 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:21:01.227558  550811 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:21:01.227563  550811 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:21:01.227567  550811 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:21:01.227570  550811 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:21:01.227574  550811 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:21:01.227578  550811 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:21:01.227582  550811 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:21:01.227588  550811 cri.go:89] found id: ""
	I1123 10:21:01.227640  550811 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:21:01.243080  550811 out.go:203] 
	W1123 10:21:01.245834  550811 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:21:01.245865  550811 out.go:285] * 
	* 
	W1123 10:21:01.252978  550811 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:21:01.256095  550811 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)
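For context on the probe sequence: the disable path first enumerates kube-system containers with crictl (the cri.go:54 / ssh_runner lines above) and only then runs the runc check that fails. A minimal sketch of the same enumeration, under the same assumptions as the previous example:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Lists kube-system container IDs the same way the disable path does
// (mirrors the crictl invocation shown in the stderr above).
func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-832672", "ssh", "--",
		"sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	out, err := cmd.Output()
	if err != nil {
		fmt.Printf("crictl listing failed: %v\n", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
}

This enumeration succeeds in every failing addon test above (23 container IDs are logged); only the subsequent runc probe fails.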

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ljzns" [2e0d0183-f64b-49bc-be57-8de8609f1775] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002943984s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-832672 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-832672 addons disable yakd --alsologtostderr -v=1: exit status 11 (279.034496ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:21:07.317954  550890 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:21:07.319137  550890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:07.319154  550890 out.go:374] Setting ErrFile to fd 2...
	I1123 10:21:07.319161  550890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:21:07.319473  550890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:21:07.319810  550890 mustload.go:66] Loading cluster: addons-832672
	I1123 10:21:07.320242  550890 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:07.320264  550890 addons.go:622] checking whether the cluster is paused
	I1123 10:21:07.320411  550890 config.go:182] Loaded profile config "addons-832672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:21:07.320432  550890 host.go:66] Checking if "addons-832672" exists ...
	I1123 10:21:07.320994  550890 cli_runner.go:164] Run: docker container inspect addons-832672 --format={{.State.Status}}
	I1123 10:21:07.345705  550890 ssh_runner.go:195] Run: systemctl --version
	I1123 10:21:07.345784  550890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-832672
	I1123 10:21:07.370563  550890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/addons-832672/id_rsa Username:docker}
	I1123 10:21:07.476301  550890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:21:07.476426  550890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:21:07.510012  550890 cri.go:89] found id: "0d6735cfc81cc4310a30c20c4b981f85566fab9fc09489f6f1a437395f1dfcb0"
	I1123 10:21:07.510044  550890 cri.go:89] found id: "876f80945af82719d8b01c59781639d2345f4b71e8d7fc86d375568da1a4cf87"
	I1123 10:21:07.510059  550890 cri.go:89] found id: "413e66dc710ea8a4519f5049aa8bb5c228d52cf8e9f827732323477d628528e4"
	I1123 10:21:07.510064  550890 cri.go:89] found id: "b6bfc4971a4ce93837b38c1eaaecf217f4ee6636e52da78a5de980f78bb0ab89"
	I1123 10:21:07.510072  550890 cri.go:89] found id: "cd4980ae684bc030d413047ce04996d1a830f74b4e60cf206a86daeca572dea2"
	I1123 10:21:07.510077  550890 cri.go:89] found id: "6b8563d255a6527db63844a572322c1aab99d308dd0bbfb19cdc0c5e2fc3140e"
	I1123 10:21:07.510081  550890 cri.go:89] found id: "59a26ed66a88a9487ad003897ac0a641153ec5938e988867de6cbf839f125334"
	I1123 10:21:07.510084  550890 cri.go:89] found id: "fac52e5468f028d615e1d3f95666a9423dd16afc64e1d08d5e5f9aef848a575b"
	I1123 10:21:07.510087  550890 cri.go:89] found id: "bee261c58130a69ce0276587961c4f22f614ddc2ca260adaf0bea34a0d165395"
	I1123 10:21:07.510094  550890 cri.go:89] found id: "13f3666d715ebd1dabb805fac178a14ef69428151d3fb2eb69403fcc7c3f1edb"
	I1123 10:21:07.510097  550890 cri.go:89] found id: "240455e48d2038e9af9486dc5afde4e8dfeeabbe84275b08a749689e64a21605"
	I1123 10:21:07.510101  550890 cri.go:89] found id: "2d505f439d6fa601f44108e05f6b80ba55085b463483cefe64d504071fb5b450"
	I1123 10:21:07.510104  550890 cri.go:89] found id: "c0e97eff7ee816a5be3431a55f5fcdb0df75a811400a67f9a8f7006524449ce4"
	I1123 10:21:07.510108  550890 cri.go:89] found id: "9892343ca47ba435b30e0c66dac5a42e6a30f11093cb2f4eba3047cdbcee5f28"
	I1123 10:21:07.510111  550890 cri.go:89] found id: "6a1f9c0d3e16f717c0d135b533908cd2509b04db5d4fea7adeefabdbdc1f6448"
	I1123 10:21:07.510124  550890 cri.go:89] found id: "3419ff6dcec28e1e2b64c598bb2d0fe79ba8b1688e25d71a9304b84fd76fd9b6"
	I1123 10:21:07.510134  550890 cri.go:89] found id: "c8a56a4ee027a10ff71d91cd17d02569a56d429f03e576851d31728127d32465"
	I1123 10:21:07.510148  550890 cri.go:89] found id: "3ff8fcd0337f594f78ecb97dcca4bbcdd390b52a330e9d2b7173421b50ab098a"
	I1123 10:21:07.510152  550890 cri.go:89] found id: "1c6ce78b41089ffc4e2927e7ddd711cef2c980d01390a84e55f5f9cbf405341b"
	I1123 10:21:07.510155  550890 cri.go:89] found id: "fe381bc317e85bfea3f0894cefdb8b43276b93a131b6974e6f19f080a2eecca8"
	I1123 10:21:07.510160  550890 cri.go:89] found id: "ed2ede976a8934335caaf790430d380a1ffee2b5a7f9caa831a196111576b1f4"
	I1123 10:21:07.510176  550890 cri.go:89] found id: "e5d0f156a4b2a157cfd048827c170e24547ee934c11666f00a9fbba1529d69e3"
	I1123 10:21:07.510179  550890 cri.go:89] found id: "3cc6c3e6832ed7712b597ab6408816e06476f637ba2f1d68c755a3114042eb20"
	I1123 10:21:07.510182  550890 cri.go:89] found id: ""
	I1123 10:21:07.510255  550890 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:21:07.525749  550890 out.go:203] 
	W1123 10:21:07.528622  550890 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:21:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:21:07.528659  550890 out.go:285] * 
	* 
	W1123 10:21:07.535897  550890 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:21:07.538708  550890 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-832672 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)
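The readiness wait itself passes here (yakd-dashboard healthy in ~6s); only the addon disable step fails. For reference, a conceptually equivalent wait expressed with `kubectl wait` is sketched below; this is not the helper the test uses, and it assumes minikube's default behavior of naming the kubectl context after the profile.

package main

import (
	"fmt"
	"os/exec"
)

// Illustrative stand-in for the test's label-based readiness wait, using the
// selector, namespace, and 2m timeout shown in the log above.
func main() {
	cmd := exec.Command("kubectl", "--context", "addons-832672", "wait",
		"--namespace", "yakd-dashboard",
		"--for=condition=Ready", "pod",
		"-l", "app.kubernetes.io/name=yakd-dashboard",
		"--timeout=120s")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("pods did not become Ready in time: %v\n", err)
	}
}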

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-336858 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-336858 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-lzr4g" [6a8d685d-48d4-42d7-91de-70cb1ca9e1a6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-336858 -n functional-336858
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-23 10:36:50.739363912 +0000 UTC m=+1216.599871521
functional_test.go:1645: (dbg) Run:  kubectl --context functional-336858 describe po hello-node-connect-7d85dfc575-lzr4g -n default
functional_test.go:1645: (dbg) kubectl --context functional-336858 describe po hello-node-connect-7d85dfc575-lzr4g -n default:
Name:             hello-node-connect-7d85dfc575-lzr4g
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-336858/192.168.49.2
Start Time:       Sun, 23 Nov 2025 10:26:50 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6z8cn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6z8cn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-lzr4g to functional-336858
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-336858 logs hello-node-connect-7d85dfc575-lzr4g -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-336858 logs hello-node-connect-7d85dfc575-lzr4g -n default: exit status 1 (96.980331ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-lzr4g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-336858 logs hello-node-connect-7d85dfc575-lzr4g -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
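The root cause here is image resolution, not scheduling: the node's runtime enforces short-name resolution, so the unqualified "kicbase/echo-server" reference from functional_test.go:1636 resolves to an ambiguous list and the pull never succeeds. A sketch of the same deployment created with a fully qualified reference, which avoids short-name resolution entirely, follows; the docker.io registry and the :1.0 tag are assumptions for illustration, not values from this report.

package main

import (
	"fmt"
	"os/exec"
)

// Same deployment as the test, but with a fully qualified image reference so
// the runtime does not have to resolve a short name against multiple
// registries. Registry and tag are assumed, not taken from the report.
func main() {
	cmd := exec.Command("kubectl", "--context", "functional-336858",
		"create", "deployment", "hello-node-connect",
		"--image", "docker.io/kicbase/echo-server:1.0")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("create deployment failed: %v\n%s", err, out)
		return
	}
	fmt.Println("deployment created with a fully qualified image reference")
}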
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-336858 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-lzr4g
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-336858/192.168.49.2
Start Time:       Sun, 23 Nov 2025 10:26:50 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6z8cn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6z8cn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-lzr4g to functional-336858
Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-336858 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-336858 logs -l app=hello-node-connect: exit status 1 (92.921756ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-lzr4g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-336858 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-336858 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.222.45
IPs:                      10.107.222.45
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32471/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
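The empty Endpoints line above follows directly from the pull failure: the single backing pod never becomes Ready, so the NodePort service has nothing to route to. A quick check for that condition (ready endpoint addresses behind the service) is sketched below, under the same context-name assumption as before.

package main

import (
	"fmt"
	"os/exec"
)

// Prints the ready endpoint addresses behind the service; an empty result
// matches the blank "Endpoints:" line in the describe output above.
func main() {
	cmd := exec.Command("kubectl", "--context", "functional-336858",
		"get", "endpoints", "hello-node-connect",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("endpoint lookup failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("ready endpoint IPs: %q\n", string(out))
}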
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-336858
helpers_test.go:243: (dbg) docker inspect functional-336858:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6",
	        "Created": "2025-11-23T10:23:56.76749101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 557533,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:23:56.830971375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6/hosts",
	        "LogPath": "/var/lib/docker/containers/0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6/0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6-json.log",
	        "Name": "/functional-336858",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-336858:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-336858",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0bac597452216f820f37bf941ee9de2c3dd3a3b9b539b7c175d760cb28b6f7e6",
	                "LowerDir": "/var/lib/docker/overlay2/a198b7b3495bd5462a63cf82ec2a5d5f6acc0f484203374e58a81ea7ecf12ea7-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a198b7b3495bd5462a63cf82ec2a5d5f6acc0f484203374e58a81ea7ecf12ea7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a198b7b3495bd5462a63cf82ec2a5d5f6acc0f484203374e58a81ea7ecf12ea7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a198b7b3495bd5462a63cf82ec2a5d5f6acc0f484203374e58a81ea7ecf12ea7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-336858",
	                "Source": "/var/lib/docker/volumes/functional-336858/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-336858",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-336858",
	                "name.minikube.sigs.k8s.io": "functional-336858",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90d6d77bcbd30da5e634245a14ae4944ce0307e72175e54f56325309d5b2e635",
	            "SandboxKey": "/var/run/docker/netns/90d6d77bcbd3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-336858": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:57:cc:6e:c1:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c693f6e34559f1bbfa3ff9bc68fc415c39125d2814ab009b5606179096afc5ba",
	                    "EndpointID": "72de903210b5ad5026c465141baa31032c6f119d7361dfa8e697af7e48f49e25",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-336858",
	                        "0bac59745221"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-336858 -n functional-336858
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 logs -n 25: (1.47788229s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │ 23 Nov 25 10:25 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │ 23 Nov 25 10:25 UTC │
	│ ssh     │ functional-336858 ssh sudo crictl images                                                                 │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │ 23 Nov 25 10:25 UTC │
	│ ssh     │ functional-336858 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │ 23 Nov 25 10:25 UTC │
	│ ssh     │ functional-336858 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │                     │
	│ cache   │ functional-336858 cache reload                                                                           │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │ 23 Nov 25 10:25 UTC │
	│ ssh     │ functional-336858 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:25 UTC │ 23 Nov 25 10:26 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ kubectl │ functional-336858 kubectl -- --context functional-336858 get pods                                        │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ start   │ -p functional-336858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ service │ invalid-svc -p functional-336858                                                                         │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │                     │
	│ ssh     │ functional-336858 ssh echo hello                                                                         │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ config  │ functional-336858 config unset cpus                                                                      │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ config  │ functional-336858 config get cpus                                                                        │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │                     │
	│ config  │ functional-336858 config set cpus 2                                                                      │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ config  │ functional-336858 config get cpus                                                                        │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ config  │ functional-336858 config unset cpus                                                                      │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ config  │ functional-336858 config get cpus                                                                        │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │                     │
	│ ssh     │ functional-336858 ssh cat /etc/hostname                                                                  │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ tunnel  │ functional-336858 tunnel --alsologtostderr                                                               │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │                     │
	│ tunnel  │ functional-336858 tunnel --alsologtostderr                                                               │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │                     │
	│ tunnel  │ functional-336858 tunnel --alsologtostderr                                                               │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │                     │
	│ addons  │ functional-336858 addons list                                                                            │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	│ addons  │ functional-336858 addons list -o json                                                                    │ functional-336858 │ jenkins │ v1.37.0 │ 23 Nov 25 10:26 UTC │ 23 Nov 25 10:26 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:26:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:26:00.939610  561840 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:26:00.939724  561840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:26:00.939728  561840 out.go:374] Setting ErrFile to fd 2...
	I1123 10:26:00.939731  561840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:26:00.939996  561840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:26:00.940387  561840 out.go:368] Setting JSON to false
	I1123 10:26:00.941296  561840 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11310,"bootTime":1763882251,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:26:00.941352  561840 start.go:143] virtualization:  
	I1123 10:26:00.945059  561840 out.go:179] * [functional-336858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:26:00.948076  561840 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:26:00.948276  561840 notify.go:221] Checking for updates...
	I1123 10:26:00.954355  561840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:26:00.957254  561840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:26:00.960131  561840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:26:00.963006  561840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:26:00.965909  561840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:26:00.969317  561840 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:26:00.969479  561840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:26:01.004445  561840 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:26:01.004582  561840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:26:01.066274  561840 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 10:26:01.055950017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:26:01.066374  561840 docker.go:319] overlay module found
	I1123 10:26:01.071266  561840 out.go:179] * Using the docker driver based on existing profile
	I1123 10:26:01.074205  561840 start.go:309] selected driver: docker
	I1123 10:26:01.074215  561840 start.go:927] validating driver "docker" against &{Name:functional-336858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:26:01.074319  561840 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:26:01.074422  561840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:26:01.137523  561840 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 10:26:01.127041172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:26:01.138007  561840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:26:01.138033  561840 cni.go:84] Creating CNI manager for ""
	I1123 10:26:01.138093  561840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:26:01.138147  561840 start.go:353] cluster config:
	{Name:functional-336858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:26:01.143124  561840 out.go:179] * Starting "functional-336858" primary control-plane node in "functional-336858" cluster
	I1123 10:26:01.146040  561840 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:26:01.149065  561840 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:26:01.151837  561840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:26:01.151881  561840 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:26:01.151889  561840 cache.go:65] Caching tarball of preloaded images
	I1123 10:26:01.151897  561840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:26:01.151989  561840 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:26:01.151999  561840 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:26:01.152116  561840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/config.json ...
	I1123 10:26:01.173638  561840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:26:01.173649  561840 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:26:01.173672  561840 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:26:01.173716  561840 start.go:360] acquireMachinesLock for functional-336858: {Name:mkdefc21fea118aed6bbb9f995701debc09e4ac4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:26:01.173784  561840 start.go:364] duration metric: took 50.733µs to acquireMachinesLock for "functional-336858"
	I1123 10:26:01.173804  561840 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:26:01.173809  561840 fix.go:54] fixHost starting: 
	I1123 10:26:01.174088  561840 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
	I1123 10:26:01.192023  561840 fix.go:112] recreateIfNeeded on functional-336858: state=Running err=<nil>
	W1123 10:26:01.192043  561840 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:26:01.195493  561840 out.go:252] * Updating the running docker "functional-336858" container ...
	I1123 10:26:01.195522  561840 machine.go:94] provisionDockerMachine start ...
	I1123 10:26:01.195621  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:01.213902  561840 main.go:143] libmachine: Using SSH client type: native
	I1123 10:26:01.214225  561840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1123 10:26:01.214232  561840 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:26:01.364970  561840 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-336858
	
	I1123 10:26:01.364985  561840 ubuntu.go:182] provisioning hostname "functional-336858"
	I1123 10:26:01.365047  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:01.382741  561840 main.go:143] libmachine: Using SSH client type: native
	I1123 10:26:01.383051  561840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1123 10:26:01.383060  561840 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-336858 && echo "functional-336858" | sudo tee /etc/hostname
	I1123 10:26:01.542737  561840 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-336858
	
	I1123 10:26:01.542825  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:01.561503  561840 main.go:143] libmachine: Using SSH client type: native
	I1123 10:26:01.561809  561840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1123 10:26:01.561823  561840 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-336858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-336858/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-336858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:26:01.713895  561840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
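The shell snippet above makes the machine hostname resolve locally: if no /etc/hosts line ends with the hostname, it rewrites an existing 127.0.1.1 entry or appends one. A standalone sketch of the same idempotent update (the NAME value is taken from this run; substitute your own):

    #!/usr/bin/env bash
    # Ensure "127.0.1.1 <name>" is present in /etc/hosts (idempotent).
    set -euo pipefail
    NAME="functional-336858"   # hostname from the log above
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts >/dev/null
      fi
    fi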
	I1123 10:26:01.713911  561840 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 10:26:01.713930  561840 ubuntu.go:190] setting up certificates
	I1123 10:26:01.713939  561840 provision.go:84] configureAuth start
	I1123 10:26:01.713997  561840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-336858
	I1123 10:26:01.732254  561840 provision.go:143] copyHostCerts
	I1123 10:26:01.732315  561840 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 10:26:01.732327  561840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 10:26:01.732398  561840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 10:26:01.732503  561840 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 10:26:01.732507  561840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 10:26:01.732531  561840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 10:26:01.732585  561840 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 10:26:01.732589  561840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 10:26:01.732610  561840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 10:26:01.732662  561840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.functional-336858 san=[127.0.0.1 192.168.49.2 functional-336858 localhost minikube]
	I1123 10:26:01.964014  561840 provision.go:177] copyRemoteCerts
	I1123 10:26:01.964073  561840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:26:01.964117  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:01.985201  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:02.093866  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:26:02.111099  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:26:02.128998  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:26:02.147166  561840 provision.go:87] duration metric: took 433.205854ms to configureAuth
	I1123 10:26:02.147204  561840 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:26:02.147393  561840 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:26:02.147488  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:02.164501  561840 main.go:143] libmachine: Using SSH client type: native
	I1123 10:26:02.164825  561840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1123 10:26:02.164840  561840 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:26:07.584178  561840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:26:07.584194  561840 machine.go:97] duration metric: took 6.388664894s to provisionDockerMachine
	I1123 10:26:07.584203  561840 start.go:293] postStartSetup for "functional-336858" (driver="docker")
	I1123 10:26:07.584213  561840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:26:07.584268  561840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:26:07.584317  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:07.602166  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:07.704892  561840 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:26:07.708108  561840 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:26:07.708125  561840 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:26:07.708135  561840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 10:26:07.708190  561840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 10:26:07.708264  561840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 10:26:07.708335  561840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/test/nested/copy/541900/hosts -> hosts in /etc/test/nested/copy/541900
	I1123 10:26:07.708381  561840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/541900
	I1123 10:26:07.715520  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 10:26:07.732806  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/test/nested/copy/541900/hosts --> /etc/test/nested/copy/541900/hosts (40 bytes)
	I1123 10:26:07.749496  561840 start.go:296] duration metric: took 165.27872ms for postStartSetup
	I1123 10:26:07.749565  561840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:26:07.749618  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:07.766681  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:07.870564  561840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:26:07.875665  561840 fix.go:56] duration metric: took 6.701847998s for fixHost
	I1123 10:26:07.875685  561840 start.go:83] releasing machines lock for "functional-336858", held for 6.701888434s
	I1123 10:26:07.875756  561840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-336858
	I1123 10:26:07.892425  561840 ssh_runner.go:195] Run: cat /version.json
	I1123 10:26:07.892469  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:07.892743  561840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:26:07.892801  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:07.919552  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:07.922113  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:08.021484  561840 ssh_runner.go:195] Run: systemctl --version
	I1123 10:26:08.113683  561840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:26:08.150517  561840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:26:08.154977  561840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:26:08.155049  561840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:26:08.163046  561840 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:26:08.163061  561840 start.go:496] detecting cgroup driver to use...
	I1123 10:26:08.163092  561840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:26:08.163139  561840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:26:08.178902  561840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:26:08.192169  561840 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:26:08.192221  561840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:26:08.207584  561840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:26:08.220892  561840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:26:08.366836  561840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:26:08.503666  561840 docker.go:234] disabling docker service ...
	I1123 10:26:08.503723  561840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:26:08.518578  561840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:26:08.531904  561840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:26:08.667140  561840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:26:08.805582  561840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:26:08.819116  561840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:26:08.833806  561840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:26:08.833860  561840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.842717  561840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:26:08.842789  561840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.851626  561840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.860322  561840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.869482  561840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:26:08.878262  561840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.887972  561840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.897076  561840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:26:08.906278  561840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:26:08.913870  561840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:26:08.921272  561840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:26:09.062163  561840 ssh_runner.go:195] Run: sudo systemctl restart crio
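The sed calls above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup), enable IPv4 forwarding, and then reload systemd and restart CRI-O. A consolidated sketch of those edits, reconstructed from the commands in this log (the default_sysctls edits are omitted for brevity, and the backup step is an addition, not part of the log):

    #!/usr/bin/env bash
    # Apply the same CRI-O drop-in edits as above, then restart the runtime.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo cp "$CONF" "$CONF.bak"   # safety backup; not in the original sequence
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio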
	I1123 10:26:09.334730  561840 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:26:09.334791  561840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:26:09.338643  561840 start.go:564] Will wait 60s for crictl version
	I1123 10:26:09.338695  561840 ssh_runner.go:195] Run: which crictl
	I1123 10:26:09.342294  561840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:26:09.370162  561840 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:26:09.370238  561840 ssh_runner.go:195] Run: crio --version
	I1123 10:26:09.399013  561840 ssh_runner.go:195] Run: crio --version
	I1123 10:26:09.430249  561840 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:26:09.433264  561840 cli_runner.go:164] Run: docker network inspect functional-336858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:26:09.449255  561840 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 10:26:09.456269  561840 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1123 10:26:09.459064  561840 kubeadm.go:884] updating cluster {Name:functional-336858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:26:09.459208  561840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:26:09.459277  561840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:26:09.492636  561840 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:26:09.492648  561840 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:26:09.492712  561840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:26:09.519711  561840 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:26:09.519723  561840 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:26:09.519729  561840 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1123 10:26:09.519822  561840 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-336858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:26:09.519902  561840 ssh_runner.go:195] Run: crio config
	I1123 10:26:09.572951  561840 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1123 10:26:09.572981  561840 cni.go:84] Creating CNI manager for ""
	I1123 10:26:09.572990  561840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:26:09.573002  561840 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:26:09.573023  561840 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-336858 NodeName:functional-336858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:26:09.573148  561840 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-336858"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
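The rendered kubeadm config above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file; apart from paths and networking, the only user-supplied change is the NamespaceAutoProvision admission plugin. To see what deviates from stock kubeadm settings, the generated file can be compared against the built-in defaults (a sketch; the .new path is the one written a few entries below):

    #!/usr/bin/env bash
    # Dump kubeadm's defaults and diff them against the generated config.
    kubeadm config print init-defaults > /tmp/kubeadm-defaults.yaml
    diff -u /tmp/kubeadm-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new || true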
	I1123 10:26:09.573215  561840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:26:09.580676  561840 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:26:09.580732  561840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:26:09.588261  561840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:26:09.601732  561840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:26:09.614290  561840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1123 10:26:09.627077  561840 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:26:09.630963  561840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:26:09.761366  561840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:26:09.774310  561840 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858 for IP: 192.168.49.2
	I1123 10:26:09.774321  561840 certs.go:195] generating shared ca certs ...
	I1123 10:26:09.774335  561840 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:26:09.774491  561840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 10:26:09.774533  561840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 10:26:09.774539  561840 certs.go:257] generating profile certs ...
	I1123 10:26:09.774618  561840 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.key
	I1123 10:26:09.774658  561840 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/apiserver.key.e0609646
	I1123 10:26:09.774697  561840 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/proxy-client.key
	I1123 10:26:09.774825  561840 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 10:26:09.774855  561840 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 10:26:09.774861  561840 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:26:09.774888  561840 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:26:09.774911  561840 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:26:09.774936  561840 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 10:26:09.774985  561840 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 10:26:09.775632  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:26:09.793419  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:26:09.810437  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:26:09.827674  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:26:09.845144  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:26:09.862565  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:26:09.881355  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:26:09.899795  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:26:09.917164  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:26:09.935067  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 10:26:09.953119  561840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 10:26:09.970675  561840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:26:09.984060  561840 ssh_runner.go:195] Run: openssl version
	I1123 10:26:09.990283  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:26:09.999453  561840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:26:10.004305  561840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:26:10.004375  561840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:26:10.062165  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:26:10.070979  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 10:26:10.079894  561840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 10:26:10.083858  561840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 10:26:10.083943  561840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 10:26:10.125761  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 10:26:10.134284  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 10:26:10.143072  561840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 10:26:10.146995  561840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 10:26:10.147057  561840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 10:26:10.188461  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
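Each certificate above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how the system trust store looks up CAs. The same pattern for a single file, using the minikubeCA path from this log:

    #!/usr/bin/env bash
    # Install a CA into the hashed trust directory the way the log does.
    set -euo pipefail
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"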
	I1123 10:26:10.197149  561840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:26:10.202229  561840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:26:10.243457  561840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:26:10.284789  561840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:26:10.325588  561840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:26:10.366911  561840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:26:10.408101  561840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
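Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certificates need regeneration. The same check over the certificates listed above, as one loop:

    #!/usr/bin/env bash
    # Flag any control-plane certificate that expires within the next 24 hours.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      if ! sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" >/dev/null; then
        echo "expiring soon: $c"
      fi
    done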
	I1123 10:26:10.449337  561840 kubeadm.go:401] StartCluster: {Name:functional-336858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:26:10.449453  561840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:26:10.449530  561840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:26:10.476926  561840 cri.go:89] found id: "a870bb74f5d9088a7dc67624d0282b85652e307f81782a5cdfc6d4334330872d"
	I1123 10:26:10.476938  561840 cri.go:89] found id: "23622c36a012da8ca9354b38f0eb13b294ffe4d1beb78a2e97e7c88a74c5e6d4"
	I1123 10:26:10.476942  561840 cri.go:89] found id: "eff73239c03e41d7d1d2a65223222b66f587e29f75277be0e28982f34b2965d7"
	I1123 10:26:10.476944  561840 cri.go:89] found id: "1c7a67543fa14f97832f00e29480e5a36b0722658496c04f4f042a97f58ddc1e"
	I1123 10:26:10.476947  561840 cri.go:89] found id: "d4485f458325de8d6cda5430ef4e306316e82648413581bba8e639c216b1385b"
	I1123 10:26:10.476950  561840 cri.go:89] found id: "0ac3b5817df6395c7e609bbc36a996e082f3f0d0fd6b02eda2e6738b5a678317"
	I1123 10:26:10.476952  561840 cri.go:89] found id: "fbdd0a8addc4962f786c482c7b7b7ba13a582b576ab30ec78a1ee8805b6a16fc"
	I1123 10:26:10.476954  561840 cri.go:89] found id: "655963cf301235d219a791face9144da9015f8dd5423110c5c123e02ef8d9d05"
	I1123 10:26:10.476956  561840 cri.go:89] found id: "8aad5b67a17341404e520420b6666e9ffed4652696045716ece483b9d2f01f73"
	I1123 10:26:10.476962  561840 cri.go:89] found id: "d92489d62c08df52fcd9facf6de108ec09ea87f474802521c5b836cf992b8cd5"
	I1123 10:26:10.476972  561840 cri.go:89] found id: "5e02e8ba362c22045f8a673e18e270a982bfc45c3c049e71a8ea79fe808cfe32"
	I1123 10:26:10.476984  561840 cri.go:89] found id: "61057ce9989d9803d3f0f1294ee12b5625556ab47ac5884022ed903b2927c4e9"
	I1123 10:26:10.476986  561840 cri.go:89] found id: "12d1ca3761276da176257faad68c45e9a6fad6397237988d4a4325e7ef763c87"
	I1123 10:26:10.476988  561840 cri.go:89] found id: "21f76348e583c86a128418bb9654335004fc89442665ff69b87eb2adb950ee2e"
	I1123 10:26:10.476990  561840 cri.go:89] found id: "79846b7fa3f08c87314b30b732c67d74212f2ffa01b7945f2d39f607fff3c60b"
	I1123 10:26:10.476994  561840 cri.go:89] found id: "3341b6269d0b0d9b72cc8e3f5868bc1f598cdda2f69131cc27544d3a90021564"
	I1123 10:26:10.476996  561840 cri.go:89] found id: ""
	I1123 10:26:10.477057  561840 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:26:10.488231  561840 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:26:10Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:26:10.488293  561840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:26:10.496019  561840 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:26:10.496029  561840 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:26:10.496079  561840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:26:10.503943  561840 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:26:10.504444  561840 kubeconfig.go:125] found "functional-336858" server: "https://192.168.49.2:8441"
	I1123 10:26:10.505828  561840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:26:10.514037  561840 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-23 10:24:02.201610198 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-23 10:26:09.623434512 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
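The diff above is the drift check itself: the freshly rendered kubeadm.yaml.new is compared against the file used at the last start, and any difference (here, the swapped enable-admission-plugins value) triggers a control-plane reconfigure. The same check as a standalone step, using the paths from this log:

    #!/usr/bin/env bash
    # Detect kubeadm config drift the same way the restart path does.
    OLD=/var/tmp/minikube/kubeadm.yaml
    NEW=/var/tmp/minikube/kubeadm.yaml.new
    if ! sudo diff -u "$OLD" "$NEW"; then
      echo "config drift detected; control plane will be reconfigured"
      sudo cp "$NEW" "$OLD"   # the log performs this copy before re-running kubeadm phases
    fi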
	I1123 10:26:10.514050  561840 kubeadm.go:1161] stopping kube-system containers ...
	I1123 10:26:10.514062  561840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 10:26:10.514120  561840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:26:10.546331  561840 cri.go:89] found id: "a870bb74f5d9088a7dc67624d0282b85652e307f81782a5cdfc6d4334330872d"
	I1123 10:26:10.546356  561840 cri.go:89] found id: "23622c36a012da8ca9354b38f0eb13b294ffe4d1beb78a2e97e7c88a74c5e6d4"
	I1123 10:26:10.546359  561840 cri.go:89] found id: "eff73239c03e41d7d1d2a65223222b66f587e29f75277be0e28982f34b2965d7"
	I1123 10:26:10.546361  561840 cri.go:89] found id: "1c7a67543fa14f97832f00e29480e5a36b0722658496c04f4f042a97f58ddc1e"
	I1123 10:26:10.546363  561840 cri.go:89] found id: "d4485f458325de8d6cda5430ef4e306316e82648413581bba8e639c216b1385b"
	I1123 10:26:10.546367  561840 cri.go:89] found id: "0ac3b5817df6395c7e609bbc36a996e082f3f0d0fd6b02eda2e6738b5a678317"
	I1123 10:26:10.546369  561840 cri.go:89] found id: "fbdd0a8addc4962f786c482c7b7b7ba13a582b576ab30ec78a1ee8805b6a16fc"
	I1123 10:26:10.546371  561840 cri.go:89] found id: "655963cf301235d219a791face9144da9015f8dd5423110c5c123e02ef8d9d05"
	I1123 10:26:10.546373  561840 cri.go:89] found id: "8aad5b67a17341404e520420b6666e9ffed4652696045716ece483b9d2f01f73"
	I1123 10:26:10.546379  561840 cri.go:89] found id: "d92489d62c08df52fcd9facf6de108ec09ea87f474802521c5b836cf992b8cd5"
	I1123 10:26:10.546383  561840 cri.go:89] found id: "5e02e8ba362c22045f8a673e18e270a982bfc45c3c049e71a8ea79fe808cfe32"
	I1123 10:26:10.546385  561840 cri.go:89] found id: "61057ce9989d9803d3f0f1294ee12b5625556ab47ac5884022ed903b2927c4e9"
	I1123 10:26:10.546387  561840 cri.go:89] found id: "12d1ca3761276da176257faad68c45e9a6fad6397237988d4a4325e7ef763c87"
	I1123 10:26:10.546389  561840 cri.go:89] found id: "21f76348e583c86a128418bb9654335004fc89442665ff69b87eb2adb950ee2e"
	I1123 10:26:10.546391  561840 cri.go:89] found id: "79846b7fa3f08c87314b30b732c67d74212f2ffa01b7945f2d39f607fff3c60b"
	I1123 10:26:10.546395  561840 cri.go:89] found id: "3341b6269d0b0d9b72cc8e3f5868bc1f598cdda2f69131cc27544d3a90021564"
	I1123 10:26:10.546397  561840 cri.go:89] found id: ""
	I1123 10:26:10.546402  561840 cri.go:252] Stopping containers: [a870bb74f5d9088a7dc67624d0282b85652e307f81782a5cdfc6d4334330872d 23622c36a012da8ca9354b38f0eb13b294ffe4d1beb78a2e97e7c88a74c5e6d4 eff73239c03e41d7d1d2a65223222b66f587e29f75277be0e28982f34b2965d7 1c7a67543fa14f97832f00e29480e5a36b0722658496c04f4f042a97f58ddc1e d4485f458325de8d6cda5430ef4e306316e82648413581bba8e639c216b1385b 0ac3b5817df6395c7e609bbc36a996e082f3f0d0fd6b02eda2e6738b5a678317 fbdd0a8addc4962f786c482c7b7b7ba13a582b576ab30ec78a1ee8805b6a16fc 655963cf301235d219a791face9144da9015f8dd5423110c5c123e02ef8d9d05 8aad5b67a17341404e520420b6666e9ffed4652696045716ece483b9d2f01f73 d92489d62c08df52fcd9facf6de108ec09ea87f474802521c5b836cf992b8cd5 5e02e8ba362c22045f8a673e18e270a982bfc45c3c049e71a8ea79fe808cfe32 61057ce9989d9803d3f0f1294ee12b5625556ab47ac5884022ed903b2927c4e9 12d1ca3761276da176257faad68c45e9a6fad6397237988d4a4325e7ef763c87 21f76348e583c86a128418bb9654335004fc89442665ff69b87eb2adb950ee2e 79846b7fa3f08c87314b30b732c67d74212f2ffa0
1b7945f2d39f607fff3c60b 3341b6269d0b0d9b72cc8e3f5868bc1f598cdda2f69131cc27544d3a90021564]
	I1123 10:26:10.546473  561840 ssh_runner.go:195] Run: which crictl
	I1123 10:26:10.550359  561840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 a870bb74f5d9088a7dc67624d0282b85652e307f81782a5cdfc6d4334330872d 23622c36a012da8ca9354b38f0eb13b294ffe4d1beb78a2e97e7c88a74c5e6d4 eff73239c03e41d7d1d2a65223222b66f587e29f75277be0e28982f34b2965d7 1c7a67543fa14f97832f00e29480e5a36b0722658496c04f4f042a97f58ddc1e d4485f458325de8d6cda5430ef4e306316e82648413581bba8e639c216b1385b 0ac3b5817df6395c7e609bbc36a996e082f3f0d0fd6b02eda2e6738b5a678317 fbdd0a8addc4962f786c482c7b7b7ba13a582b576ab30ec78a1ee8805b6a16fc 655963cf301235d219a791face9144da9015f8dd5423110c5c123e02ef8d9d05 8aad5b67a17341404e520420b6666e9ffed4652696045716ece483b9d2f01f73 d92489d62c08df52fcd9facf6de108ec09ea87f474802521c5b836cf992b8cd5 5e02e8ba362c22045f8a673e18e270a982bfc45c3c049e71a8ea79fe808cfe32 61057ce9989d9803d3f0f1294ee12b5625556ab47ac5884022ed903b2927c4e9 12d1ca3761276da176257faad68c45e9a6fad6397237988d4a4325e7ef763c87 21f76348e583c86a128418bb9654335004fc89442665ff69b87eb2adb950ee2e 79846b
7fa3f08c87314b30b732c67d74212f2ffa01b7945f2d39f607fff3c60b 3341b6269d0b0d9b72cc8e3f5868bc1f598cdda2f69131cc27544d3a90021564
	I1123 10:26:10.653086  561840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1123 10:26:10.768426  561840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:26:10.776169  561840 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 23 10:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 23 10:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 23 10:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 23 10:24 /etc/kubernetes/scheduler.conf
	
	I1123 10:26:10.776226  561840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1123 10:26:10.784357  561840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1123 10:26:10.791989  561840 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:26:10.792064  561840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:26:10.800061  561840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1123 10:26:10.807690  561840 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:26:10.807749  561840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:26:10.814791  561840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1123 10:26:10.822377  561840 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:26:10.822437  561840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
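The grep/rm pairs above drop any kubeconfig that does not point at https://control-plane.minikube.internal:8441, so the following `kubeadm init phase kubeconfig` regenerates them against the right endpoint; admin.conf already matched in this run and was kept. The same cleanup consolidated into one loop (files and endpoint as in the log):

    #!/usr/bin/env bash
    # Keep only kubeconfigs that already point at the expected control-plane endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
        echo "removing stale /etc/kubernetes/$f"
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done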
	I1123 10:26:10.829658  561840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:26:10.837197  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:26:10.886501  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:26:12.597551  561840 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.711013882s)
	I1123 10:26:12.597623  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:26:12.817945  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:26:12.870331  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:26:12.944360  561840 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:26:12.944430  561840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:26:13.445373  561840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:26:13.944940  561840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:26:13.964472  561840 api_server.go:72] duration metric: took 1.020112307s to wait for apiserver process to appear ...
	I1123 10:26:13.964485  561840 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:26:13.964511  561840 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 10:26:17.632754  561840 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:26:17.632777  561840 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:26:17.632791  561840 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 10:26:17.761575  561840 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:26:17.761594  561840 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:26:17.964933  561840 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 10:26:17.975807  561840 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:26:17.975826  561840 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:26:18.465459  561840 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 10:26:18.475606  561840 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:26:18.475621  561840 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:26:18.965287  561840 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 10:26:18.973484  561840 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1123 10:26:18.992626  561840 api_server.go:141] control plane version: v1.34.1
	I1123 10:26:18.992649  561840 api_server.go:131] duration metric: took 5.028158388s to wait for apiserver health ...
	I1123 10:26:18.992657  561840 cni.go:84] Creating CNI manager for ""
	I1123 10:26:18.992662  561840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:26:18.996292  561840 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:26:18.999298  561840 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:26:19.009890  561840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:26:19.009900  561840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:26:19.026778  561840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:26:19.509438  561840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:26:19.513181  561840 system_pods.go:59] 8 kube-system pods found
	I1123 10:26:19.513204  561840 system_pods.go:61] "coredns-66bc5c9577-4gbjl" [39623064-9e28-43b1-83f3-3b7f94b44378] Running
	I1123 10:26:19.513213  561840 system_pods.go:61] "etcd-functional-336858" [d6846ea5-500a-4e67-82ae-4bea7a9be01b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:26:19.513216  561840 system_pods.go:61] "kindnet-j67tg" [0e11da0c-1107-4aad-bb4a-a121876d8631] Running
	I1123 10:26:19.513223  561840 system_pods.go:61] "kube-apiserver-functional-336858" [993732df-7453-44a6-9308-a942c484a786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:26:19.513230  561840 system_pods.go:61] "kube-controller-manager-functional-336858" [0a3c638b-2089-48d8-b0a8-a11c93cddec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:26:19.513234  561840 system_pods.go:61] "kube-proxy-bcwmk" [3f1db530-4963-440f-8c1b-fe9b5d03e4b9] Running
	I1123 10:26:19.513239  561840 system_pods.go:61] "kube-scheduler-functional-336858" [6ec2febb-db93-4390-9801-ac92bce0c3d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:26:19.513241  561840 system_pods.go:61] "storage-provisioner" [82570423-f64c-49f5-9e58-0e177e082d33] Running
	I1123 10:26:19.513247  561840 system_pods.go:74] duration metric: took 3.797978ms to wait for pod list to return data ...
	I1123 10:26:19.513253  561840 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:26:19.516426  561840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:26:19.516455  561840 node_conditions.go:123] node cpu capacity is 2
	I1123 10:26:19.516466  561840 node_conditions.go:105] duration metric: took 3.210368ms to run NodePressure ...
	I1123 10:26:19.516529  561840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:26:19.770536  561840 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1123 10:26:19.779419  561840 kubeadm.go:744] kubelet initialised
	I1123 10:26:19.779430  561840 kubeadm.go:745] duration metric: took 8.880848ms waiting for restarted kubelet to initialise ...
	I1123 10:26:19.779444  561840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:26:19.789403  561840 ops.go:34] apiserver oom_adj: -16
	I1123 10:26:19.789433  561840 kubeadm.go:602] duration metric: took 9.293397832s to restartPrimaryControlPlane
	I1123 10:26:19.789440  561840 kubeadm.go:403] duration metric: took 9.340113213s to StartCluster
	I1123 10:26:19.789455  561840 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:26:19.789520  561840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:26:19.790135  561840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:26:19.790347  561840 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:26:19.790678  561840 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:26:19.790737  561840 addons.go:70] Setting storage-provisioner=true in profile "functional-336858"
	I1123 10:26:19.790755  561840 addons.go:239] Setting addon storage-provisioner=true in "functional-336858"
	W1123 10:26:19.790760  561840 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:26:19.790780  561840 host.go:66] Checking if "functional-336858" exists ...
	I1123 10:26:19.791241  561840 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
	I1123 10:26:19.791527  561840 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:26:19.791589  561840 addons.go:70] Setting default-storageclass=true in profile "functional-336858"
	I1123 10:26:19.791600  561840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-336858"
	I1123 10:26:19.791912  561840 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
	I1123 10:26:19.796449  561840 out.go:179] * Verifying Kubernetes components...
	I1123 10:26:19.800337  561840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:26:19.844742  561840 addons.go:239] Setting addon default-storageclass=true in "functional-336858"
	W1123 10:26:19.844754  561840 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:26:19.844778  561840 host.go:66] Checking if "functional-336858" exists ...
	I1123 10:26:19.845198  561840 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
	I1123 10:26:19.845333  561840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:26:19.848293  561840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:26:19.848305  561840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:26:19.848373  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:19.879714  561840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:26:19.879728  561840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:26:19.879790  561840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:26:19.888268  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:19.914057  561840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:26:20.022035  561840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:26:20.047520  561840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:26:20.069560  561840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:26:20.815907  561840 node_ready.go:35] waiting up to 6m0s for node "functional-336858" to be "Ready" ...
	I1123 10:26:20.819008  561840 node_ready.go:49] node "functional-336858" is "Ready"
	I1123 10:26:20.819024  561840 node_ready.go:38] duration metric: took 3.09972ms for node "functional-336858" to be "Ready" ...
	I1123 10:26:20.819037  561840 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:26:20.819096  561840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:26:20.826647  561840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:26:20.829671  561840 addons.go:530] duration metric: took 1.038988818s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:26:20.833261  561840 api_server.go:72] duration metric: took 1.042889838s to wait for apiserver process to appear ...
	I1123 10:26:20.833275  561840 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:26:20.833292  561840 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 10:26:20.842709  561840 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1123 10:26:20.843863  561840 api_server.go:141] control plane version: v1.34.1
	I1123 10:26:20.843877  561840 api_server.go:131] duration metric: took 10.596746ms to wait for apiserver health ...
	I1123 10:26:20.843884  561840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:26:20.847387  561840 system_pods.go:59] 8 kube-system pods found
	I1123 10:26:20.847403  561840 system_pods.go:61] "coredns-66bc5c9577-4gbjl" [39623064-9e28-43b1-83f3-3b7f94b44378] Running
	I1123 10:26:20.847411  561840 system_pods.go:61] "etcd-functional-336858" [d6846ea5-500a-4e67-82ae-4bea7a9be01b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:26:20.847414  561840 system_pods.go:61] "kindnet-j67tg" [0e11da0c-1107-4aad-bb4a-a121876d8631] Running
	I1123 10:26:20.847420  561840 system_pods.go:61] "kube-apiserver-functional-336858" [993732df-7453-44a6-9308-a942c484a786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:26:20.847424  561840 system_pods.go:61] "kube-controller-manager-functional-336858" [0a3c638b-2089-48d8-b0a8-a11c93cddec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:26:20.847427  561840 system_pods.go:61] "kube-proxy-bcwmk" [3f1db530-4963-440f-8c1b-fe9b5d03e4b9] Running
	I1123 10:26:20.847432  561840 system_pods.go:61] "kube-scheduler-functional-336858" [6ec2febb-db93-4390-9801-ac92bce0c3d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:26:20.847435  561840 system_pods.go:61] "storage-provisioner" [82570423-f64c-49f5-9e58-0e177e082d33] Running
	I1123 10:26:20.847439  561840 system_pods.go:74] duration metric: took 3.551681ms to wait for pod list to return data ...
	I1123 10:26:20.847446  561840 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:26:20.849932  561840 default_sa.go:45] found service account: "default"
	I1123 10:26:20.849944  561840 default_sa.go:55] duration metric: took 2.49359ms for default service account to be created ...
	I1123 10:26:20.849951  561840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:26:20.856037  561840 system_pods.go:86] 8 kube-system pods found
	I1123 10:26:20.856054  561840 system_pods.go:89] "coredns-66bc5c9577-4gbjl" [39623064-9e28-43b1-83f3-3b7f94b44378] Running
	I1123 10:26:20.856063  561840 system_pods.go:89] "etcd-functional-336858" [d6846ea5-500a-4e67-82ae-4bea7a9be01b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:26:20.856068  561840 system_pods.go:89] "kindnet-j67tg" [0e11da0c-1107-4aad-bb4a-a121876d8631] Running
	I1123 10:26:20.856074  561840 system_pods.go:89] "kube-apiserver-functional-336858" [993732df-7453-44a6-9308-a942c484a786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:26:20.856080  561840 system_pods.go:89] "kube-controller-manager-functional-336858" [0a3c638b-2089-48d8-b0a8-a11c93cddec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:26:20.856083  561840 system_pods.go:89] "kube-proxy-bcwmk" [3f1db530-4963-440f-8c1b-fe9b5d03e4b9] Running
	I1123 10:26:20.856088  561840 system_pods.go:89] "kube-scheduler-functional-336858" [6ec2febb-db93-4390-9801-ac92bce0c3d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:26:20.856091  561840 system_pods.go:89] "storage-provisioner" [82570423-f64c-49f5-9e58-0e177e082d33] Running
	I1123 10:26:20.856098  561840 system_pods.go:126] duration metric: took 6.142216ms to wait for k8s-apps to be running ...
	I1123 10:26:20.856105  561840 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:26:20.856166  561840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:26:20.872306  561840 system_svc.go:56] duration metric: took 16.191763ms WaitForService to wait for kubelet
	I1123 10:26:20.872324  561840 kubeadm.go:587] duration metric: took 1.081956643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:26:20.872341  561840 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:26:20.875164  561840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:26:20.875178  561840 node_conditions.go:123] node cpu capacity is 2
	I1123 10:26:20.875188  561840 node_conditions.go:105] duration metric: took 2.84329ms to run NodePressure ...
	I1123 10:26:20.875208  561840 start.go:242] waiting for startup goroutines ...
	I1123 10:26:20.875214  561840 start.go:247] waiting for cluster config update ...
	I1123 10:26:20.875224  561840 start.go:256] writing updated cluster config ...
	I1123 10:26:20.875517  561840 ssh_runner.go:195] Run: rm -f paused
	I1123 10:26:20.879783  561840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:26:20.883089  561840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4gbjl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:20.933066  561840 pod_ready.go:94] pod "coredns-66bc5c9577-4gbjl" is "Ready"
	I1123 10:26:20.933083  561840 pod_ready.go:86] duration metric: took 49.972527ms for pod "coredns-66bc5c9577-4gbjl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:20.945352  561840 pod_ready.go:83] waiting for pod "etcd-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:26:22.950175  561840 pod_ready.go:104] pod "etcd-functional-336858" is not "Ready", error: <nil>
	I1123 10:26:23.450911  561840 pod_ready.go:94] pod "etcd-functional-336858" is "Ready"
	I1123 10:26:23.450924  561840 pod_ready.go:86] duration metric: took 2.505559184s for pod "etcd-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:23.453276  561840 pod_ready.go:83] waiting for pod "kube-apiserver-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:26:25.458633  561840 pod_ready.go:104] pod "kube-apiserver-functional-336858" is not "Ready", error: <nil>
	W1123 10:26:27.959436  561840 pod_ready.go:104] pod "kube-apiserver-functional-336858" is not "Ready", error: <nil>
	W1123 10:26:30.459058  561840 pod_ready.go:104] pod "kube-apiserver-functional-336858" is not "Ready", error: <nil>
	I1123 10:26:30.958949  561840 pod_ready.go:94] pod "kube-apiserver-functional-336858" is "Ready"
	I1123 10:26:30.958963  561840 pod_ready.go:86] duration metric: took 7.505675021s for pod "kube-apiserver-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:30.961313  561840 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:30.965745  561840 pod_ready.go:94] pod "kube-controller-manager-functional-336858" is "Ready"
	I1123 10:26:30.965759  561840 pod_ready.go:86] duration metric: took 4.434188ms for pod "kube-controller-manager-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:30.968172  561840 pod_ready.go:83] waiting for pod "kube-proxy-bcwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:30.972504  561840 pod_ready.go:94] pod "kube-proxy-bcwmk" is "Ready"
	I1123 10:26:30.972518  561840 pod_ready.go:86] duration metric: took 4.334519ms for pod "kube-proxy-bcwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:30.974862  561840 pod_ready.go:83] waiting for pod "kube-scheduler-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:31.557488  561840 pod_ready.go:94] pod "kube-scheduler-functional-336858" is "Ready"
	I1123 10:26:31.557502  561840 pod_ready.go:86] duration metric: took 582.629069ms for pod "kube-scheduler-functional-336858" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:26:31.557513  561840 pod_ready.go:40] duration metric: took 10.677710073s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:26:31.612795  561840 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:26:31.615937  561840 out.go:179] * Done! kubectl is now configured to use "functional-336858" cluster and "default" namespace by default
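	Note: the healthz polling above (a 403 for the anonymous probe, then 500s while the [-] poststarthooks finish, then 200) can be reproduced by hand against the restarted cluster. A minimal sketch, assuming the functional-336858 kubectl context that the log says is now configured:
	
	  kubectl --context functional-336858 get --raw='/healthz?verbose'   # same per-check [+]/[-] breakdown minikube polls
	  kubectl --context functional-336858 get --raw='/readyz?verbose'    # readiness variant of the same endpoint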
	
	
	==> CRI-O <==
	Nov 23 10:27:06 functional-336858 crio[3677]: time="2025-11-23T10:27:06.062580866Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-4btwd Namespace:default ID:8abd3848e172952e0e767b29c813e33ab45294c7e547aa1edf1122787c1d6a35 UID:80ab325d-aaef-403d-bd10-148d2d008de7 NetNS:/var/run/netns/bec2e39e-c68a-4df8-8115-bfad50ec4efb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000da3a8}] Aliases:map[]}"
	Nov 23 10:27:06 functional-336858 crio[3677]: time="2025-11-23T10:27:06.062745331Z" level=info msg="Checking pod default_hello-node-75c85bcc94-4btwd for CNI network kindnet (type=ptp)"
	Nov 23 10:27:06 functional-336858 crio[3677]: time="2025-11-23T10:27:06.070091528Z" level=info msg="Ran pod sandbox 8abd3848e172952e0e767b29c813e33ab45294c7e547aa1edf1122787c1d6a35 with infra container: default/hello-node-75c85bcc94-4btwd/POD" id=bdb34c77-a7fd-40bb-8c5c-51cdf8a1b3a0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:27:06 functional-336858 crio[3677]: time="2025-11-23T10:27:06.071479068Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d086239e-bc74-47f5-a25b-a0cdf4a5a3b7 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.109493105Z" level=info msg="Stopping pod sandbox: 4f6ec69bec2484cf9d7169152f75be7ae4263fb65e7a37075acbbbc4e4976e1f" id=0af9f517-9db5-4ba7-831e-51932c15f9c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.109551001Z" level=info msg="Stopped pod sandbox (already stopped): 4f6ec69bec2484cf9d7169152f75be7ae4263fb65e7a37075acbbbc4e4976e1f" id=0af9f517-9db5-4ba7-831e-51932c15f9c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.110286716Z" level=info msg="Removing pod sandbox: 4f6ec69bec2484cf9d7169152f75be7ae4263fb65e7a37075acbbbc4e4976e1f" id=68f73874-ac99-4396-a076-09ce204381d5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.114864972Z" level=info msg="Removed pod sandbox: 4f6ec69bec2484cf9d7169152f75be7ae4263fb65e7a37075acbbbc4e4976e1f" id=68f73874-ac99-4396-a076-09ce204381d5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.128574492Z" level=info msg="Stopping pod sandbox: d7881cd1eb86648ca5b32fc3839fe1604685eb7165c677fdfdec2f0a6e3c369f" id=71789b14-01f9-40b8-9473-dcd368257033 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.128635309Z" level=info msg="Stopped pod sandbox (already stopped): d7881cd1eb86648ca5b32fc3839fe1604685eb7165c677fdfdec2f0a6e3c369f" id=71789b14-01f9-40b8-9473-dcd368257033 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.130687432Z" level=info msg="Removing pod sandbox: d7881cd1eb86648ca5b32fc3839fe1604685eb7165c677fdfdec2f0a6e3c369f" id=4e99c4f6-6fea-4aa1-8d31-7f4e4e129b2f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.134889954Z" level=info msg="Removed pod sandbox: d7881cd1eb86648ca5b32fc3839fe1604685eb7165c677fdfdec2f0a6e3c369f" id=4e99c4f6-6fea-4aa1-8d31-7f4e4e129b2f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.136835655Z" level=info msg="Stopping pod sandbox: 7792352bc4a67fc244633877aaa5cae8cc33183267afa550b4375305753049a0" id=e342b9fc-045c-4646-b4df-a95043871bb2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.136885256Z" level=info msg="Stopped pod sandbox (already stopped): 7792352bc4a67fc244633877aaa5cae8cc33183267afa550b4375305753049a0" id=e342b9fc-045c-4646-b4df-a95043871bb2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.139459003Z" level=info msg="Removing pod sandbox: 7792352bc4a67fc244633877aaa5cae8cc33183267afa550b4375305753049a0" id=03fb9981-5787-4b26-80c8-ecef22045779 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:27:13 functional-336858 crio[3677]: time="2025-11-23T10:27:13.143194096Z" level=info msg="Removed pod sandbox: 7792352bc4a67fc244633877aaa5cae8cc33183267afa550b4375305753049a0" id=03fb9981-5787-4b26-80c8-ecef22045779 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 10:27:18 functional-336858 crio[3677]: time="2025-11-23T10:27:18.983288247Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d7dafc7c-0c2f-4cf9-b11d-c3f138dc7d29 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:27:27 functional-336858 crio[3677]: time="2025-11-23T10:27:27.983358906Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b2838f20-4527-4f51-8106-ffe750f29da7 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:27:46 functional-336858 crio[3677]: time="2025-11-23T10:27:46.983356035Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=03cf41eb-d54f-4a43-b049-f6756747b203 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:28:19 functional-336858 crio[3677]: time="2025-11-23T10:28:19.983323875Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32d08869-063d-4dad-b8af-10dfbb999ef5 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:28:27 functional-336858 crio[3677]: time="2025-11-23T10:28:27.983075236Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9e6430b5-70b2-4a5e-8113-f632635e610f name=/runtime.v1.ImageService/PullImage
	Nov 23 10:29:44 functional-336858 crio[3677]: time="2025-11-23T10:29:44.983607975Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1af7b298-50ea-4c5d-9479-a9482b1bb633 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:29:55 functional-336858 crio[3677]: time="2025-11-23T10:29:55.983083582Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=81f94abd-e277-4890-9938-aa5021890f0b name=/runtime.v1.ImageService/PullImage
	Nov 23 10:32:37 functional-336858 crio[3677]: time="2025-11-23T10:32:37.983418329Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=38027b13-97b1-471e-974b-fe6ee5167b96 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:32:44 functional-336858 crio[3677]: time="2025-11-23T10:32:44.983333987Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=27531402-4d46-40db-ac99-bafb2cb24f15 name=/runtime.v1.ImageService/PullImage
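	Note: the tail of this CRI-O log shows the same "Pulling image: kicbase/echo-server:latest" request being re-issued at growing intervals, with no completed pull logged in this excerpt; that is consistent with the hello-node pods of the failing ServiceCmd tests sitting in image pull. A quick check from the node, as a sketch assuming the profile name used throughout this report:
	
	  minikube -p functional-336858 ssh -- sudo crictl images | grep echo-server        # is the image present locally?
	  minikube -p functional-336858 ssh -- sudo crictl pull kicbase/echo-server:latest  # retry the pull interactively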
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b0adee71afa86       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   7670b95618bfc       sp-pod                                      default
	f7cab37bbfa21       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   4756863b3ec3f       nginx-svc                                   default
	180395a0c3720       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   b41185e81e3e3       kindnet-j67tg                               kube-system
	ac266b7db24d4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   a2416f54a0b1b       kube-proxy-bcwmk                            kube-system
	f4124d7699d2f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   28c06b73a5b0a       coredns-66bc5c9577-4gbjl                    kube-system
	eade8fc1abf07       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   02a6aaf4d250b       storage-provisioner                         kube-system
	70277a59f39b2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   35d737270a61d       kube-apiserver-functional-336858            kube-system
	48340c1078ee5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   1642f472a2673       kube-scheduler-functional-336858            kube-system
	db2edb98c28f1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   d613e418ac11a       kube-controller-manager-functional-336858   kube-system
	33a8d1eae8b48       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   bdc558185074c       etcd-functional-336858                      kube-system
	a870bb74f5d90       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   28c06b73a5b0a       coredns-66bc5c9577-4gbjl                    kube-system
	23622c36a012d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   02a6aaf4d250b       storage-provisioner                         kube-system
	eff73239c03e4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   d613e418ac11a       kube-controller-manager-functional-336858   kube-system
	d4485f458325d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   1642f472a2673       kube-scheduler-functional-336858            kube-system
	0ac3b5817df63       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   b41185e81e3e3       kindnet-j67tg                               kube-system
	fbdd0a8addc49       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   a2416f54a0b1b       kube-proxy-bcwmk                            kube-system
	655963cf30123       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   bdc558185074c       etcd-functional-336858                      kube-system
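	Note: the container status table records the in-place control-plane restart done during TestFunctional: each component's previous container is listed as Exited with ATTEMPT 2 next to its Running replacement with ATTEMPT 3 in the same pod sandbox (same POD ID), while kube-apiserver shows ATTEMPT 0 in a new sandbox with no Exited counterpart in this listing. To list only the exited containers on the node, a sketch under the same profile-name assumption as above:
	
	  minikube -p functional-336858 ssh -- sudo crictl ps -a --state exited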
	
	
	==> coredns [a870bb74f5d9088a7dc67624d0282b85652e307f81782a5cdfc6d4334330872d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45564 - 61195 "HINFO IN 4920379524534818749.5145313665358945590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030272177s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f4124d7699d2f6f054cbeffc065ab32b0cf7d003b772e8f1390cb6934d6b1632] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36197 - 37451 "HINFO IN 8925136218873887426.8876007880723656651. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014773133s
	
	
	==> describe nodes <==
	Name:               functional-336858
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-336858
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=functional-336858
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_24_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:24:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-336858
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:36:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:35:30 +0000   Sun, 23 Nov 2025 10:24:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:35:30 +0000   Sun, 23 Nov 2025 10:24:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:35:30 +0000   Sun, 23 Nov 2025 10:24:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:35:30 +0000   Sun, 23 Nov 2025 10:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-336858
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                a6a48e63-73f2-4096-838c-a9825b4a3a33
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-4btwd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-lzr4g          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-4gbjl                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-336858                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-j67tg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-336858             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-336858    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bcwmk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-336858             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-336858 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-336858 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-336858 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-336858 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-336858 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-336858 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-336858 event: Registered Node functional-336858 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-336858 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-336858 event: Registered Node functional-336858 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-336858 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-336858 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-336858 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-336858 event: Registered Node functional-336858 in Controller
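	Note: the Allocated resources block above is the sum of the per-pod requests listed in the same output: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. roughly 42% of the 2000m node capacity, and kindnet's 100m is the only CPU limit set. The same summary can be pulled straight from the cluster, assuming the kubectl context named earlier:
	
	  kubectl --context functional-336858 describe node functional-336858 | grep -A 8 'Allocated resources'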
	
	
	==> dmesg <==
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	[Nov23 10:16] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:17] overlayfs: idmapped layers are currently not supported
	[Nov23 10:23] overlayfs: idmapped layers are currently not supported
	[Nov23 10:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [33a8d1eae8b487abba641b4265e137dd3b9ac690fc60b30dc6553a4b01b9cba8] <==
	{"level":"warn","ts":"2025-11-23T10:26:15.950166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:15.982843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.028947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.050228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.133505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.150115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.198306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.214110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.248484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.278310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.313662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.361504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.371886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.407964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.435889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.469896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.490259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.515396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.558349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.593620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.607462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:26:16.708682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T10:36:14.905266Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1163}
	{"level":"info","ts":"2025-11-23T10:36:14.930315Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1163,"took":"24.619911ms","hash":3718005158,"current-db-size-bytes":3428352,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1544192,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-23T10:36:14.930376Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3718005158,"revision":1163,"compact-revision":-1}
	
	
	==> etcd [655963cf301235d219a791face9144da9015f8dd5423110c5c123e02ef8d9d05] <==
	{"level":"warn","ts":"2025-11-23T10:25:31.577042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:25:31.593627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:25:31.610896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:25:31.653107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:25:31.714160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:25:31.744059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:25:31.871996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T10:26:02.340113Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T10:26:02.340179Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-336858","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-23T10:26:02.340296Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T10:26:02.472627Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T10:26:02.474168Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T10:26:02.474235Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T10:26:02.474282Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T10:26:02.474292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T10:26:02.474262Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-23T10:26:02.474358Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T10:26:02.474364Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T10:26:02.474441Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T10:26:02.474481Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T10:26:02.474377Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T10:26:02.478319Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-23T10:26:02.478401Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T10:26:02.478456Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-23T10:26:02.478517Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-336858","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:36:52 up  3:19,  0 user,  load average: 0.30, 0.38, 1.44
	Linux functional-336858 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0ac3b5817df6395c7e609bbc36a996e082f3f0d0fd6b02eda2e6738b5a678317] <==
	I1123 10:25:28.961301       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:25:28.961539       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1123 10:25:28.961690       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:25:28.961702       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:25:28.961716       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:25:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:25:29.185685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:25:29.185766       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:25:29.185799       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:25:29.197220       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:25:33.017903       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:25:33.017929       1 metrics.go:72] Registering metrics
	I1123 10:25:33.017977       1 controller.go:711] "Syncing nftables rules"
	I1123 10:25:39.185820       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:25:39.185892       1 main.go:301] handling current node
	I1123 10:25:49.185180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:25:49.185226       1 main.go:301] handling current node
	I1123 10:25:59.191431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:25:59.191469       1 main.go:301] handling current node
	
	
	==> kindnet [180395a0c3720bcc7a68b4a50515109295092430493eaa942adaa52fef558b0e] <==
	I1123 10:34:48.669104       1 main.go:301] handling current node
	I1123 10:34:58.667495       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:34:58.667548       1 main.go:301] handling current node
	I1123 10:35:08.667148       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:35:08.667261       1 main.go:301] handling current node
	I1123 10:35:18.673515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:35:18.673621       1 main.go:301] handling current node
	I1123 10:35:28.668177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:35:28.668350       1 main.go:301] handling current node
	I1123 10:35:38.667982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:35:38.668024       1 main.go:301] handling current node
	I1123 10:35:48.669487       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:35:48.669522       1 main.go:301] handling current node
	I1123 10:35:58.667518       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:35:58.667551       1 main.go:301] handling current node
	I1123 10:36:08.672875       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:36:08.672912       1 main.go:301] handling current node
	I1123 10:36:18.666670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:36:18.666780       1 main.go:301] handling current node
	I1123 10:36:28.672847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:36:28.672885       1 main.go:301] handling current node
	I1123 10:36:38.675067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:36:38.675168       1 main.go:301] handling current node
	I1123 10:36:48.666962       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 10:36:48.667065       1 main.go:301] handling current node
	
	
	==> kube-apiserver [70277a59f39b2e3c2f6b3fcc103b6e4051b8d3f9745835bc3cd29d2d9fcfc7c5] <==
	I1123 10:26:17.771023       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:26:17.771049       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:26:17.784664       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:26:17.795663       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:26:17.797035       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:26:17.797143       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:26:17.802307       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:26:17.802481       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:26:17.804937       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:26:17.823004       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1123 10:26:17.835443       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:26:18.046579       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:26:18.433263       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:26:19.502455       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:26:19.655944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:26:19.743462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:26:19.755445       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:26:34.923996       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.79.70"}
	I1123 10:26:34.939705       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:26:34.946230       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:26:40.703609       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.39.74"}
	I1123 10:26:50.219609       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:26:50.396244       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.222.45"}
	I1123 10:27:05.805738       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.255.235"}
	I1123 10:36:17.726398       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [db2edb98c28f1c0f86cb4bd4e4e1c854b877fa3669eba7070a9b074a700a08b7] <==
	I1123 10:26:20.995114       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:26:21.001468       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:26:21.002722       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:26:21.002850       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:26:21.005167       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:26:21.007256       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:26:21.012618       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:26:21.013825       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:26:21.013842       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:26:21.013946       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:26:21.016223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:26:21.016312       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:26:21.022592       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:26:21.022661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:26:21.025814       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:26:21.027085       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:26:21.027092       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:26:21.027285       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:26:21.027381       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-336858"
	I1123 10:26:21.027485       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:26:21.027499       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:26:21.027903       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:26:21.030369       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:26:21.030411       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:26:21.041529       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-controller-manager [eff73239c03e41d7d1d2a65223222b66f587e29f75277be0e28982f34b2965d7] <==
	I1123 10:25:36.480173       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:25:36.489396       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:25:36.492646       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:25:36.494764       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:25:36.495839       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:25:36.497977       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:25:36.498001       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:25:36.499111       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:25:36.500337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:25:36.501502       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:25:36.503805       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:25:36.505903       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:25:36.507141       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:25:36.507243       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:25:36.507322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-336858"
	I1123 10:25:36.507373       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:25:36.509229       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:25:36.510506       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:25:36.510605       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:25:36.510685       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:25:36.511146       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:25:36.511254       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:25:36.512340       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:25:36.514722       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:25:36.516085       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [ac266b7db24d493c6190aa03ca939a2b2f11d7e487717668474d66225d950067] <==
	I1123 10:26:18.634257       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:26:18.781711       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:26:18.884556       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:26:18.884681       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 10:26:18.884806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:26:18.908136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:26:18.908257       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:26:18.912560       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:26:18.912908       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:26:18.913117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:26:18.914675       1 config.go:200] "Starting service config controller"
	I1123 10:26:18.914913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:26:18.914971       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:26:18.915000       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:26:18.915076       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:26:18.915109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:26:18.916014       1 config.go:309] "Starting node config controller"
	I1123 10:26:18.916067       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:26:18.916099       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:26:19.016540       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:26:19.016574       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:26:19.016618       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fbdd0a8addc4962f786c482c7b7b7ba13a582b576ab30ec78a1ee8805b6a16fc] <==
	I1123 10:25:28.926746       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:25:29.082531       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:25:33.034864       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:25:33.034912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 10:25:33.034982       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:25:33.281029       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:25:33.281080       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:25:33.308719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:25:33.313812       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:25:33.313837       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:25:33.314910       1 config.go:200] "Starting service config controller"
	I1123 10:25:33.314921       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:25:33.325138       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:25:33.325163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:25:33.325190       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:25:33.325195       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:25:33.334226       1 config.go:309] "Starting node config controller"
	I1123 10:25:33.334249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:25:33.334264       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:25:33.415393       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:25:33.428444       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:25:33.428490       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [48340c1078ee5b18ead66dd9154f65a91150f75432dde8fa80a51b3342c0c64a] <==
	I1123 10:26:16.067451       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:26:18.598238       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:26:18.598340       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:26:18.614972       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:26:18.615385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:26:18.625553       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:26:18.615400       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:26:18.625631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:26:18.615338       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:26:18.633374       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:26:18.615413       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:26:18.730198       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:26:18.730281       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:26:18.735940       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [d4485f458325de8d6cda5430ef4e306316e82648413581bba8e639c216b1385b] <==
	I1123 10:25:30.640036       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:25:32.809042       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:25:32.809153       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:25:32.809191       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:25:32.809232       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:25:33.055726       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:25:33.055756       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:25:33.058148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:25:33.058228       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:25:33.060132       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:25:33.060244       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:25:33.165606       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:26:02.336698       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 10:26:02.336799       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 10:26:02.336810       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 10:26:02.336829       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:26:02.336951       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 10:26:02.336974       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 23 10:34:10 functional-336858 kubelet[3995]: E1123 10:34:10.983124    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:34:16 functional-336858 kubelet[3995]: E1123 10:34:16.984050    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:34:25 functional-336858 kubelet[3995]: E1123 10:34:25.982587    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:34:27 functional-336858 kubelet[3995]: E1123 10:34:27.982775    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:34:37 functional-336858 kubelet[3995]: E1123 10:34:37.982452    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:34:38 functional-336858 kubelet[3995]: E1123 10:34:38.983224    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:34:50 functional-336858 kubelet[3995]: E1123 10:34:50.982256    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:34:51 functional-336858 kubelet[3995]: E1123 10:34:51.982825    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:35:03 functional-336858 kubelet[3995]: E1123 10:35:03.982160    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:35:06 functional-336858 kubelet[3995]: E1123 10:35:06.982224    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:35:17 functional-336858 kubelet[3995]: E1123 10:35:17.983164    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:35:21 functional-336858 kubelet[3995]: E1123 10:35:21.982874    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:35:32 functional-336858 kubelet[3995]: E1123 10:35:32.983168    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:35:36 functional-336858 kubelet[3995]: E1123 10:35:36.982790    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:35:46 functional-336858 kubelet[3995]: E1123 10:35:46.983687    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:35:49 functional-336858 kubelet[3995]: E1123 10:35:49.982840    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:35:59 functional-336858 kubelet[3995]: E1123 10:35:59.982509    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:36:04 functional-336858 kubelet[3995]: E1123 10:36:04.982988    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:36:13 functional-336858 kubelet[3995]: E1123 10:36:13.983060    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:36:19 functional-336858 kubelet[3995]: E1123 10:36:19.982512    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:36:24 functional-336858 kubelet[3995]: E1123 10:36:24.982792    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:36:30 functional-336858 kubelet[3995]: E1123 10:36:30.983293    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:36:39 functional-336858 kubelet[3995]: E1123 10:36:39.982329    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	Nov 23 10:36:45 functional-336858 kubelet[3995]: E1123 10:36:45.982999    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4btwd" podUID="80ab325d-aaef-403d-bd10-148d2d008de7"
	Nov 23 10:36:51 functional-336858 kubelet[3995]: E1123 10:36:51.982492    3995 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lzr4g" podUID="6a8d685d-48d4-42d7-91de-70cb1ca9e1a6"
	
	
	==> storage-provisioner [23622c36a012da8ca9354b38f0eb13b294ffe4d1beb78a2e97e7c88a74c5e6d4] <==
	I1123 10:25:33.132443       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:25:33.156718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:25:33.187564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:36.649530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:40.910146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:44.509207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:47.562985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:50.585442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:50.590299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:25:50.590437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:25:50.590594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-336858_619e503b-3a56-426f-b0a2-7b219246faf1!
	I1123 10:25:50.591498       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5356610-b825-4cb6-b30e-12eeb7ff2a40", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-336858_619e503b-3a56-426f-b0a2-7b219246faf1 became leader
	W1123 10:25:50.595864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:50.605478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:25:50.690731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-336858_619e503b-3a56-426f-b0a2-7b219246faf1!
	W1123 10:25:52.609068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:52.616258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:54.619759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:54.626805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:56.630107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:56.635198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:58.637744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:25:58.642417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:26:00.645629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:26:00.653248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eade8fc1abf07e5bfe61c94cdb0ebf344e3381b9a3f5613a2e25d596c696461c] <==
	W1123 10:36:28.878600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:30.881090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:30.886344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:32.889984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:32.894761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:34.897588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:34.904676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:36.908198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:36.912539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:38.915266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:38.919603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:40.922320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:40.926769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:42.929731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:42.936435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:44.939727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:44.944290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:46.946936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:46.951080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:48.954060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:48.960696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:50.963534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:50.967918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:52.971396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:36:52.976793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-336858 -n functional-336858
helpers_test.go:269: (dbg) Run:  kubectl --context functional-336858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-4btwd hello-node-connect-7d85dfc575-lzr4g
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-336858 describe pod hello-node-75c85bcc94-4btwd hello-node-connect-7d85dfc575-lzr4g
helpers_test.go:290: (dbg) kubectl --context functional-336858 describe pod hello-node-75c85bcc94-4btwd hello-node-connect-7d85dfc575-lzr4g:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-4btwd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-336858/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 10:27:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dpm9r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dpm9r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-4btwd to functional-336858
	  Normal   Pulling    6m58s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m43s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m43s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-lzr4g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-336858/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 10:26:50 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6z8cn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6z8cn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-lzr4g to functional-336858
	  Normal   Pulling    7m9s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x42 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.55s)
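The repeated pull failure above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") means CRI-O refuses to resolve the unqualified image reference rather than guessing a registry. A minimal sketch of the same deploy/expose flow with a fully qualified reference follows; the registry and tag (docker.io/kicbase/echo-server:1.0) and the hello-node-fq deployment name are illustrative assumptions, not part of this run.

	# Sketch, not from this run: fully qualify the image so CRI-O's enforcing
	# short-name mode has nothing to resolve (image tag and deployment name assumed).
	kubectl --context functional-336858 create deployment hello-node-fq \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-336858 expose deployment hello-node-fq --type=NodePort --port=8080
	kubectl --context functional-336858 rollout status deployment/hello-node-fq --timeout=120s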

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-336858 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-336858 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-4btwd" [80ab325d-aaef-403d-bd10-148d2d008de7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1123 10:27:22.721734  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:29:38.856321  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:06.563265  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:38.857103  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-336858 -n functional-336858
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-23 10:37:06.250133605 +0000 UTC m=+1232.110641214
functional_test.go:1460: (dbg) Run:  kubectl --context functional-336858 describe po hello-node-75c85bcc94-4btwd -n default
functional_test.go:1460: (dbg) kubectl --context functional-336858 describe po hello-node-75c85bcc94-4btwd -n default:
Name:             hello-node-75c85bcc94-4btwd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-336858/192.168.49.2
Start Time:       Sun, 23 Nov 2025 10:27:05 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dpm9r (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dpm9r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-4btwd to functional-336858
  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-336858 logs hello-node-75c85bcc94-4btwd -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-336858 logs hello-node-75c85bcc94-4btwd -n default: exit status 1 (120.262368ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-4btwd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-336858 logs hello-node-75c85bcc94-4btwd -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 service --namespace=default --https --url hello-node: exit status 115 (510.926192ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32192
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-336858 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 service hello-node --url --format={{.IP}}: exit status 115 (532.027629ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-336858 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 service hello-node --url: exit status 115 (568.680444ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32192
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-336858 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32192
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image load --daemon kicbase/echo-server:functional-336858 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 image load --daemon kicbase/echo-server:functional-336858 --alsologtostderr: (2.003286243s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-336858" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image load --daemon kicbase/echo-server:functional-336858 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls
2025/11/23 10:37:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:461: expected "kicbase/echo-server:functional-336858" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-336858
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image load --daemon kicbase/echo-server:functional-336858 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-336858" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image save kicbase/echo-server:functional-336858 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1123 10:37:20.379020  569672 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:37:20.379938  569672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:37:20.379997  569672 out.go:374] Setting ErrFile to fd 2...
	I1123 10:37:20.380019  569672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:37:20.380356  569672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:37:20.381161  569672 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:37:20.381371  569672 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:37:20.382023  569672 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
	I1123 10:37:20.410783  569672 ssh_runner.go:195] Run: systemctl --version
	I1123 10:37:20.410948  569672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
	I1123 10:37:20.437225  569672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
	I1123 10:37:20.552724  569672 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1123 10:37:20.552859  569672 cache_images.go:255] Failed to load cached images for "functional-336858": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1123 10:37:20.552898  569672 cache_images.go:267] failed pushing to: functional-336858

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)
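
Note: this failure follows directly from ImageSaveToFile above: "image save" never wrote /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar, so the subsequent "image load" fails on the stat. Outside the harness the two commands are normally chained as below; the path is chosen here for illustration and the invocation is not part of the test:

	out/minikube-linux-arm64 -p functional-336858 image save kicbase/echo-server:functional-336858 /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-336858 image load /tmp/echo-server.tar

Until the save step actually produces a tarball, every load from that file will keep failing with the same "no such file or directory" error.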

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-336858
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image save --daemon kicbase/echo-server:functional-336858 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-336858
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-336858: exit status 1 (28.227345ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-336858

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-336858

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.46s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-577713 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-577713 --output=json --user=testUser: exit status 80 (2.462387496s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c1abc1c-95d7-420d-ae9f-737d580fadff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-577713 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"8da176ea-b8de-4aec-a58f-372d4eed1f12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T10:50:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"ac896f90-0723-46fe-93a5-aba0e2189a42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-577713 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.46s)
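
Note: the JSON output mode is not at fault here; "sudo runc list -f json" fails inside the node because /run/runc does not exist, and that error is what GUEST_PAUSE surfaces. A plausible manual check on an affected profile (commands and profile name are illustrative, not part of the test) would be:

	out/minikube-linux-arm64 -p json-output-577713 ssh -- sudo ls /run/runc
	out/minikube-linux-arm64 -p json-output-577713 ssh -- sudo runc list -f json

If /run/runc is missing while containers are clearly still running, the likely explanation is that crio is driving containers through a different OCI runtime or state directory than the pause path's "runc list" call assumes, so pause and unpause keep exiting with status 80.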

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.01s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-577713 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-577713 --output=json --user=testUser: exit status 80 (2.011764126s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"31fd692a-3add-40b9-a96c-7db305d9773f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-577713 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e30cb1fe-c7db-49b1-ab8a-175a6fbe8a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T10:50:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"1f22258f-4557-4271-aeff-57b047be2841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-577713 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.01s)

                                                
                                    
x
+
TestPause/serial/Pause (7.32s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-851396 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-851396 --alsologtostderr -v=5: exit status 80 (2.134827816s)

                                                
                                                
-- stdout --
	* Pausing node pause-851396 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:11:36.342566  705297 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:11:36.346883  705297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:36.346951  705297 out.go:374] Setting ErrFile to fd 2...
	I1123 11:11:36.346976  705297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:36.347405  705297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:11:36.347787  705297 out.go:368] Setting JSON to false
	I1123 11:11:36.347872  705297 mustload.go:66] Loading cluster: pause-851396
	I1123 11:11:36.348829  705297 config.go:182] Loaded profile config "pause-851396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:11:36.351475  705297 cli_runner.go:164] Run: docker container inspect pause-851396 --format={{.State.Status}}
	I1123 11:11:36.404653  705297 host.go:66] Checking if "pause-851396" exists ...
	I1123 11:11:36.404964  705297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:11:36.559401  705297 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2025-11-23 11:11:36.549749559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:11:36.561133  705297 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-851396 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 11:11:36.564709  705297 out.go:179] * Pausing node pause-851396 ... 
	I1123 11:11:36.568593  705297 host.go:66] Checking if "pause-851396" exists ...
	I1123 11:11:36.568921  705297 ssh_runner.go:195] Run: systemctl --version
	I1123 11:11:36.568965  705297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-851396
	I1123 11:11:36.609133  705297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/pause-851396/id_rsa Username:docker}
	I1123 11:11:36.814132  705297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:11:36.840561  705297 pause.go:52] kubelet running: true
	I1123 11:11:36.840667  705297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:11:37.233584  705297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:11:37.233663  705297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:11:37.345349  705297 cri.go:89] found id: "603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779"
	I1123 11:11:37.345448  705297 cri.go:89] found id: "8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09"
	I1123 11:11:37.345454  705297 cri.go:89] found id: "be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad"
	I1123 11:11:37.345458  705297 cri.go:89] found id: "9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a"
	I1123 11:11:37.345461  705297 cri.go:89] found id: "686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51"
	I1123 11:11:37.345465  705297 cri.go:89] found id: "b8eafc94b9395c4bfec93915f95f833d7d764c18b6dbef394cf3c7cd472463a3"
	I1123 11:11:37.345468  705297 cri.go:89] found id: "ea91a163f905eeba88b8ea7e3801829d87ba9f5d61e6bb1c910ee0a26e354d25"
	I1123 11:11:37.345471  705297 cri.go:89] found id: "6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0"
	I1123 11:11:37.345474  705297 cri.go:89] found id: "79ec0f028945c9366cd1dea3a928591cacc78dd9cb18b919359c4591dd509b5b"
	I1123 11:11:37.345480  705297 cri.go:89] found id: "c967e9d7f93ee9491f74a45910a90d4ac5a80619e4fb348641b53bfc542f3d5b"
	I1123 11:11:37.345484  705297 cri.go:89] found id: "a86a7e745509cbb7107ad994a24d49af455b06ee1caa337e1ad42a41b1ce63a4"
	I1123 11:11:37.345486  705297 cri.go:89] found id: "14ce68d0c4b73cef6f9a8aff77094f3572fbac9afd09ad8be6f574da13448ffa"
	I1123 11:11:37.345489  705297 cri.go:89] found id: "21cb0a447d3b944086ee0b5509988e688b45f2688eb9d6ada2ba4aaff747f8e0"
	I1123 11:11:37.345492  705297 cri.go:89] found id: "ce8a08e19ba1ec6dae45de9cd6dcd5f735e8ad071d611bacd75625796e97de95"
	I1123 11:11:37.345495  705297 cri.go:89] found id: ""
	I1123 11:11:37.345546  705297 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:11:37.358686  705297 retry.go:31] will retry after 163.386234ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:11:37Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:11:37.523081  705297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:11:37.536768  705297 pause.go:52] kubelet running: false
	I1123 11:11:37.536832  705297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:11:37.704658  705297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:11:37.704746  705297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:11:37.788792  705297 cri.go:89] found id: "603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779"
	I1123 11:11:37.788813  705297 cri.go:89] found id: "8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09"
	I1123 11:11:37.788818  705297 cri.go:89] found id: "be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad"
	I1123 11:11:37.788822  705297 cri.go:89] found id: "9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a"
	I1123 11:11:37.788825  705297 cri.go:89] found id: "686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51"
	I1123 11:11:37.788829  705297 cri.go:89] found id: "b8eafc94b9395c4bfec93915f95f833d7d764c18b6dbef394cf3c7cd472463a3"
	I1123 11:11:37.788837  705297 cri.go:89] found id: "ea91a163f905eeba88b8ea7e3801829d87ba9f5d61e6bb1c910ee0a26e354d25"
	I1123 11:11:37.788840  705297 cri.go:89] found id: "6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0"
	I1123 11:11:37.788843  705297 cri.go:89] found id: "79ec0f028945c9366cd1dea3a928591cacc78dd9cb18b919359c4591dd509b5b"
	I1123 11:11:37.788849  705297 cri.go:89] found id: "c967e9d7f93ee9491f74a45910a90d4ac5a80619e4fb348641b53bfc542f3d5b"
	I1123 11:11:37.788853  705297 cri.go:89] found id: "a86a7e745509cbb7107ad994a24d49af455b06ee1caa337e1ad42a41b1ce63a4"
	I1123 11:11:37.788855  705297 cri.go:89] found id: "14ce68d0c4b73cef6f9a8aff77094f3572fbac9afd09ad8be6f574da13448ffa"
	I1123 11:11:37.788858  705297 cri.go:89] found id: "21cb0a447d3b944086ee0b5509988e688b45f2688eb9d6ada2ba4aaff747f8e0"
	I1123 11:11:37.788861  705297 cri.go:89] found id: "ce8a08e19ba1ec6dae45de9cd6dcd5f735e8ad071d611bacd75625796e97de95"
	I1123 11:11:37.788864  705297 cri.go:89] found id: ""
	I1123 11:11:37.788916  705297 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:11:37.802204  705297 retry.go:31] will retry after 285.677243ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:11:37Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:11:38.088768  705297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:11:38.103779  705297 pause.go:52] kubelet running: false
	I1123 11:11:38.103876  705297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:11:38.251308  705297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:11:38.251387  705297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:11:38.321666  705297 cri.go:89] found id: "603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779"
	I1123 11:11:38.321726  705297 cri.go:89] found id: "8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09"
	I1123 11:11:38.321745  705297 cri.go:89] found id: "be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad"
	I1123 11:11:38.321766  705297 cri.go:89] found id: "9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a"
	I1123 11:11:38.321787  705297 cri.go:89] found id: "686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51"
	I1123 11:11:38.321807  705297 cri.go:89] found id: "b8eafc94b9395c4bfec93915f95f833d7d764c18b6dbef394cf3c7cd472463a3"
	I1123 11:11:38.321826  705297 cri.go:89] found id: "ea91a163f905eeba88b8ea7e3801829d87ba9f5d61e6bb1c910ee0a26e354d25"
	I1123 11:11:38.321854  705297 cri.go:89] found id: "6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0"
	I1123 11:11:38.321874  705297 cri.go:89] found id: "79ec0f028945c9366cd1dea3a928591cacc78dd9cb18b919359c4591dd509b5b"
	I1123 11:11:38.321904  705297 cri.go:89] found id: "c967e9d7f93ee9491f74a45910a90d4ac5a80619e4fb348641b53bfc542f3d5b"
	I1123 11:11:38.321908  705297 cri.go:89] found id: "a86a7e745509cbb7107ad994a24d49af455b06ee1caa337e1ad42a41b1ce63a4"
	I1123 11:11:38.321912  705297 cri.go:89] found id: "14ce68d0c4b73cef6f9a8aff77094f3572fbac9afd09ad8be6f574da13448ffa"
	I1123 11:11:38.321915  705297 cri.go:89] found id: "21cb0a447d3b944086ee0b5509988e688b45f2688eb9d6ada2ba4aaff747f8e0"
	I1123 11:11:38.321920  705297 cri.go:89] found id: "ce8a08e19ba1ec6dae45de9cd6dcd5f735e8ad071d611bacd75625796e97de95"
	I1123 11:11:38.321923  705297 cri.go:89] found id: ""
	I1123 11:11:38.321976  705297 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:11:38.337109  705297 out.go:203] 
	W1123 11:11:38.340041  705297 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:11:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:11:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 11:11:38.340064  705297 out.go:285] * 
	* 
	W1123 11:11:38.348138  705297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 11:11:38.351000  705297 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-851396 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-851396
helpers_test.go:243: (dbg) docker inspect pause-851396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669",
	        "Created": "2025-11-23T11:09:54.98996582Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696273,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:09:55.055385558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/hosts",
	        "LogPath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669-json.log",
	        "Name": "/pause-851396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-851396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-851396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669",
	                "LowerDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-851396",
	                "Source": "/var/lib/docker/volumes/pause-851396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-851396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-851396",
	                "name.minikube.sigs.k8s.io": "pause-851396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a81ca9c6bb83585c66c1e90825dc23d4f21ee9e5d88d384ef1405562d8f1160f",
	            "SandboxKey": "/var/run/docker/netns/a81ca9c6bb83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33767"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33771"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33769"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33770"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-851396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:66:52:4c:21:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77f5a7963cbc0840fcbcaf845d9d9be3309b173bb6a8e85ed146d2781ab597e8",
	                    "EndpointID": "195cb4320447a6af5e935b670f57b3ced20c8cc01abfc806100cb74eb39088f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-851396",
	                        "d3c5656a1394"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-851396 -n pause-851396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-851396 -n pause-851396: exit status 2 (345.02769ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-851396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-851396 logs -n 25: (1.38938548s)
E1123 11:11:40.156321  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-344709 sudo systemctl cat kubelet --no-pager                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status docker --all --full --no-pager                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat docker --no-pager                                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/docker/daemon.json                                                          │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo docker system info                                                                   │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cri-dockerd --version                                                                │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat containerd --no-pager                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/containerd/config.toml                                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo containerd config dump                                                               │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status crio --all --full --no-pager                                        │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat crio --no-pager                                                        │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo crio config                                                                          │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p cilium-344709                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ pause   │ -p pause-851396 --alsologtostderr -v=5                                                                     │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:11:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:11:29.913880  704744 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:11:29.914380  704744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:29.914442  704744 out.go:374] Setting ErrFile to fd 2...
	I1123 11:11:29.914461  704744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:29.914768  704744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:11:29.915243  704744 out.go:368] Setting JSON to false
	I1123 11:11:29.916248  704744 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14039,"bootTime":1763882251,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:11:29.916351  704744 start.go:143] virtualization:  
	I1123 11:11:29.919972  704744 out.go:179] * [force-systemd-env-613417] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:11:29.923876  704744 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:11:29.923982  704744 notify.go:221] Checking for updates...
	I1123 11:11:29.929689  704744 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:11:29.932740  704744 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:11:29.935671  704744 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:11:29.938665  704744 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:11:29.941557  704744 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1123 11:11:29.944862  704744 config.go:182] Loaded profile config "pause-851396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:11:29.945027  704744 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:11:30.005819  704744 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:11:30.005969  704744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:11:30.144353  704744 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:11:30.126699014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:11:30.144464  704744 docker.go:319] overlay module found
	I1123 11:11:30.147780  704744 out.go:179] * Using the docker driver based on user configuration
	I1123 11:11:30.150770  704744 start.go:309] selected driver: docker
	I1123 11:11:30.150799  704744 start.go:927] validating driver "docker" against <nil>
	I1123 11:11:30.150815  704744 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:11:30.151563  704744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:11:30.258476  704744 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:11:30.245773799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:11:30.258623  704744 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 11:11:30.258863  704744 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 11:11:30.261888  704744 out.go:179] * Using Docker driver with root privileges
	I1123 11:11:30.264821  704744 cni.go:84] Creating CNI manager for ""
	I1123 11:11:30.264894  704744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:11:30.264902  704744 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:11:30.265005  704744 start.go:353] cluster config:
	{Name:force-systemd-env-613417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-613417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:11:30.268125  704744 out.go:179] * Starting "force-systemd-env-613417" primary control-plane node in "force-systemd-env-613417" cluster
	I1123 11:11:30.271071  704744 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:11:30.273947  704744 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:11:30.276651  704744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:11:30.276697  704744 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:11:30.276707  704744 cache.go:65] Caching tarball of preloaded images
	I1123 11:11:30.276795  704744 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:11:30.276805  704744 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:11:30.276918  704744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/force-systemd-env-613417/config.json ...
	I1123 11:11:30.276937  704744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/force-systemd-env-613417/config.json: {Name:mk7b635cd35cc121b9c799624a9a217c93a1b182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:11:30.277113  704744 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:11:30.310527  704744 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:11:30.310548  704744 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:11:30.310564  704744 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:11:30.310594  704744 start.go:360] acquireMachinesLock for force-systemd-env-613417: {Name:mk1ca84ab38c833c22f2813ed0795c8158deb10a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:11:30.310696  704744 start.go:364] duration metric: took 86.36µs to acquireMachinesLock for "force-systemd-env-613417"
	I1123 11:11:30.310722  704744 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-613417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-613417 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:11:30.310795  704744 start.go:125] createHost starting for "" (driver="docker")
	I1123 11:11:27.452471  702840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:11:27.454483  702840 addons.go:530] duration metric: took 10.014889ms for enable addons: enabled=[]
	I1123 11:11:27.848934  702840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:11:27.868488  702840 node_ready.go:35] waiting up to 6m0s for node "pause-851396" to be "Ready" ...
	I1123 11:11:30.314317  704744 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:11:30.314564  704744 start.go:159] libmachine.API.Create for "force-systemd-env-613417" (driver="docker")
	I1123 11:11:30.314594  704744 client.go:173] LocalClient.Create starting
	I1123 11:11:30.314662  704744 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:11:30.314694  704744 main.go:143] libmachine: Decoding PEM data...
	I1123 11:11:30.314711  704744 main.go:143] libmachine: Parsing certificate...
	I1123 11:11:30.314762  704744 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:11:30.314778  704744 main.go:143] libmachine: Decoding PEM data...
	I1123 11:11:30.314798  704744 main.go:143] libmachine: Parsing certificate...
	I1123 11:11:30.315164  704744 cli_runner.go:164] Run: docker network inspect force-systemd-env-613417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:11:30.335629  704744 cli_runner.go:211] docker network inspect force-systemd-env-613417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:11:30.335708  704744 network_create.go:284] running [docker network inspect force-systemd-env-613417] to gather additional debugging logs...
	I1123 11:11:30.335731  704744 cli_runner.go:164] Run: docker network inspect force-systemd-env-613417
	W1123 11:11:30.369619  704744 cli_runner.go:211] docker network inspect force-systemd-env-613417 returned with exit code 1
	I1123 11:11:30.369649  704744 network_create.go:287] error running [docker network inspect force-systemd-env-613417]: docker network inspect force-systemd-env-613417: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-613417 not found
	I1123 11:11:30.369675  704744 network_create.go:289] output of [docker network inspect force-systemd-env-613417]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-613417 not found
	
	** /stderr **
	I1123 11:11:30.369774  704744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:11:30.392693  704744 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:11:30.392979  704744 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:11:30.393307  704744 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:11:30.393658  704744 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-77f5a7963cbc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:af:f2:e8:1d:c7} reservation:<nil>}
	I1123 11:11:30.394070  704744 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1eef0}
	I1123 11:11:30.394086  704744 network_create.go:124] attempt to create docker network force-systemd-env-613417 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 11:11:30.394148  704744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-613417 force-systemd-env-613417
	I1123 11:11:30.516682  704744 network_create.go:108] docker network force-systemd-env-613417 192.168.85.0/24 created
	I1123 11:11:30.516713  704744 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-613417" container
	I1123 11:11:30.516804  704744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:11:30.544784  704744 cli_runner.go:164] Run: docker volume create force-systemd-env-613417 --label name.minikube.sigs.k8s.io=force-systemd-env-613417 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:11:30.594720  704744 oci.go:103] Successfully created a docker volume force-systemd-env-613417
	I1123 11:11:30.594800  704744 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-613417-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-613417 --entrypoint /usr/bin/test -v force-systemd-env-613417:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:11:31.255861  704744 oci.go:107] Successfully prepared a docker volume force-systemd-env-613417
	I1123 11:11:31.255919  704744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:11:31.255929  704744 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:11:31.256003  704744 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-613417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 11:11:32.765025  702840 node_ready.go:49] node "pause-851396" is "Ready"
	I1123 11:11:32.765057  702840 node_ready.go:38] duration metric: took 4.895800741s for node "pause-851396" to be "Ready" ...
	I1123 11:11:32.765070  702840 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:11:32.765146  702840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:11:32.793236  702840 api_server.go:72] duration metric: took 5.349095453s to wait for apiserver process to appear ...
	I1123 11:11:32.793263  702840 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:11:32.793283  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:32.825458  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 11:11:32.825483  702840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 11:11:33.294082  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:33.306470  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:11:33.306514  702840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:11:33.794617  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:33.809201  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:11:33.809271  702840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:11:34.293515  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:34.304241  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:11:34.305702  702840 api_server.go:141] control plane version: v1.34.1
	I1123 11:11:34.305730  702840 api_server.go:131] duration metric: took 1.512459412s to wait for apiserver health ...
	I1123 11:11:34.305740  702840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:11:34.312718  702840 system_pods.go:59] 7 kube-system pods found
	I1123 11:11:34.312751  702840 system_pods.go:61] "coredns-66bc5c9577-rbc5g" [3f642fdf-2820-4ee7-b750-42bafbb58242] Running
	I1123 11:11:34.312756  702840 system_pods.go:61] "etcd-pause-851396" [a376ffed-4a2f-4edf-84c9-cd0d314abbe4] Running
	I1123 11:11:34.312760  702840 system_pods.go:61] "kindnet-cp9rv" [909d682e-40a7-4fb7-a79f-ba04282e4abc] Running
	I1123 11:11:34.312764  702840 system_pods.go:61] "kube-apiserver-pause-851396" [c0d1d45e-0288-43ca-897b-be2d54b07389] Running
	I1123 11:11:34.312769  702840 system_pods.go:61] "kube-controller-manager-pause-851396" [092ba549-b8ce-4ab3-97e7-8603367dc014] Running
	I1123 11:11:34.312773  702840 system_pods.go:61] "kube-proxy-btdv8" [01daa514-dacd-44f2-ac38-0983f6684774] Running
	I1123 11:11:34.312777  702840 system_pods.go:61] "kube-scheduler-pause-851396" [cf5e872f-9901-4004-aa75-6a6e5fdb6c16] Running
	I1123 11:11:34.312782  702840 system_pods.go:74] duration metric: took 7.037113ms to wait for pod list to return data ...
	I1123 11:11:34.312789  702840 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:11:34.316989  702840 default_sa.go:45] found service account: "default"
	I1123 11:11:34.317059  702840 default_sa.go:55] duration metric: took 4.263739ms for default service account to be created ...
	I1123 11:11:34.317094  702840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:11:34.320343  702840 system_pods.go:86] 7 kube-system pods found
	I1123 11:11:34.320424  702840 system_pods.go:89] "coredns-66bc5c9577-rbc5g" [3f642fdf-2820-4ee7-b750-42bafbb58242] Running
	I1123 11:11:34.320446  702840 system_pods.go:89] "etcd-pause-851396" [a376ffed-4a2f-4edf-84c9-cd0d314abbe4] Running
	I1123 11:11:34.320467  702840 system_pods.go:89] "kindnet-cp9rv" [909d682e-40a7-4fb7-a79f-ba04282e4abc] Running
	I1123 11:11:34.320502  702840 system_pods.go:89] "kube-apiserver-pause-851396" [c0d1d45e-0288-43ca-897b-be2d54b07389] Running
	I1123 11:11:34.320528  702840 system_pods.go:89] "kube-controller-manager-pause-851396" [092ba549-b8ce-4ab3-97e7-8603367dc014] Running
	I1123 11:11:34.320548  702840 system_pods.go:89] "kube-proxy-btdv8" [01daa514-dacd-44f2-ac38-0983f6684774] Running
	I1123 11:11:34.320569  702840 system_pods.go:89] "kube-scheduler-pause-851396" [cf5e872f-9901-4004-aa75-6a6e5fdb6c16] Running
	I1123 11:11:34.320604  702840 system_pods.go:126] duration metric: took 3.479549ms to wait for k8s-apps to be running ...
	I1123 11:11:34.320630  702840 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:11:34.320713  702840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:11:34.338422  702840 system_svc.go:56] duration metric: took 17.781931ms WaitForService to wait for kubelet
	I1123 11:11:34.338494  702840 kubeadm.go:587] duration metric: took 6.894357416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:11:34.338527  702840 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:11:34.344380  702840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:11:34.344457  702840 node_conditions.go:123] node cpu capacity is 2
	I1123 11:11:34.344488  702840 node_conditions.go:105] duration metric: took 5.937704ms to run NodePressure ...
	I1123 11:11:34.344515  702840 start.go:242] waiting for startup goroutines ...
	I1123 11:11:34.344556  702840 start.go:247] waiting for cluster config update ...
	I1123 11:11:34.344579  702840 start.go:256] writing updated cluster config ...
	I1123 11:11:34.345552  702840 ssh_runner.go:195] Run: rm -f paused
	I1123 11:11:34.350444  702840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:11:34.351043  702840 kapi.go:59] client config for pause-851396: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.key", CAFile:"/home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 11:11:34.356172  702840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbc5g" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.364755  702840 pod_ready.go:94] pod "coredns-66bc5c9577-rbc5g" is "Ready"
	I1123 11:11:34.364827  702840 pod_ready.go:86] duration metric: took 8.626647ms for pod "coredns-66bc5c9577-rbc5g" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.368004  702840 pod_ready.go:83] waiting for pod "etcd-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.375483  702840 pod_ready.go:94] pod "etcd-pause-851396" is "Ready"
	I1123 11:11:34.375509  702840 pod_ready.go:86] duration metric: took 7.471876ms for pod "etcd-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.378218  702840 pod_ready.go:83] waiting for pod "kube-apiserver-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.384409  702840 pod_ready.go:94] pod "kube-apiserver-pause-851396" is "Ready"
	I1123 11:11:34.384436  702840 pod_ready.go:86] duration metric: took 6.176739ms for pod "kube-apiserver-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.386928  702840 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.757835  702840 pod_ready.go:94] pod "kube-controller-manager-pause-851396" is "Ready"
	I1123 11:11:34.757865  702840 pod_ready.go:86] duration metric: took 370.903534ms for pod "kube-controller-manager-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.976478  702840 pod_ready.go:83] waiting for pod "kube-proxy-btdv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:35.360767  702840 pod_ready.go:94] pod "kube-proxy-btdv8" is "Ready"
	I1123 11:11:35.360800  702840 pod_ready.go:86] duration metric: took 384.290526ms for pod "kube-proxy-btdv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:35.562604  702840 pod_ready.go:83] waiting for pod "kube-scheduler-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:36.037869  702840 pod_ready.go:94] pod "kube-scheduler-pause-851396" is "Ready"
	I1123 11:11:36.037899  702840 pod_ready.go:86] duration metric: took 475.265627ms for pod "kube-scheduler-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:36.037913  702840 pod_ready.go:40] duration metric: took 1.687382612s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:11:36.154792  702840 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:11:36.160279  702840 out.go:179] * Done! kubectl is now configured to use "pause-851396" cluster and "default" namespace by default
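
The healthz exchange earlier in this log (403 for the anonymous probe, then 500 while the rbac/bootstrap-roles post-start hook finishes, then 200 "ok") is a plain polling loop. A minimal sketch of that pattern, assuming the self-signed cluster certificate and the roughly 500ms retry cadence visible in the timestamps above (the helper name, timeout, and skip-verify client are illustrative; this is not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok",
// treating 403/500 responses (and transport errors) as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster serves a self-signed certificate, so
		// verification is skipped in this illustration only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log above
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
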
	
	
	==> CRI-O <==
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.805766095Z" level=info msg="Started container" PID=2333 containerID=9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a description=kube-system/etcd-pause-851396/etcd id=d3f89f7a-3025-4387-be7f-54dbc29bd1d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d6ecd4c28e5a88968446677d21a7a5653c15b5736e949abb7824b3fac54a68d
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.81779933Z" level=info msg="Created container 686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51: kube-system/kube-scheduler-pause-851396/kube-scheduler" id=6519a74c-40d3-40aa-b6a6-287f7004434b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.818564706Z" level=info msg="Starting container: 686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51" id=8f99d640-f40e-48c2-8833-aa23fb219eab name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.836593811Z" level=info msg="Started container" PID=2331 containerID=686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51 description=kube-system/kube-scheduler-pause-851396/kube-scheduler id=8f99d640-f40e-48c2-8833-aa23fb219eab name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb213035cee3800ddc8f42a034774bd50bcf3429d5f9cb2413cda7fa396fc842
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.846867534Z" level=info msg="Created container 8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09: kube-system/kindnet-cp9rv/kindnet-cni" id=d1ad674b-2f65-4f07-bf24-64d214246c5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.854468527Z" level=info msg="Starting container: 8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09" id=f16a30f0-c11b-4752-8455-a732a60f1b52 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.863487179Z" level=info msg="Started container" PID=2340 containerID=8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09 description=kube-system/kindnet-cp9rv/kindnet-cni id=f16a30f0-c11b-4752-8455-a732a60f1b52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=564d8cbf24f3e184262964d741f41933c9bed4ee52080a133d3dd20f97e91ac3
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.863882991Z" level=info msg="Created container 603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779: kube-system/coredns-66bc5c9577-rbc5g/coredns" id=a82aac4e-0dac-454a-a816-e93df389f857 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.868905049Z" level=info msg="Starting container: 603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779" id=9e0a0148-b8a1-41c0-bb86-d0849dbb9a9f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.873323965Z" level=info msg="Started container" PID=2349 containerID=603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779 description=kube-system/coredns-66bc5c9577-rbc5g/coredns id=9e0a0148-b8a1-41c0-bb86-d0849dbb9a9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d9d476bd1c11d3cc380a0211e9520ae8bb3436a99008179ddc9c2277217b898
	Nov 23 11:11:27 pause-851396 crio[2076]: time="2025-11-23T11:11:27.345666248Z" level=info msg="Created container be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad: kube-system/kube-proxy-btdv8/kube-proxy" id=45996a59-e4de-4a51-aeca-d7e4cdc3424c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:27 pause-851396 crio[2076]: time="2025-11-23T11:11:27.352738915Z" level=info msg="Starting container: be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad" id=7762ac14-d62c-46e2-b261-abc5b0e7b8c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:27 pause-851396 crio[2076]: time="2025-11-23T11:11:27.373681278Z" level=info msg="Started container" PID=2334 containerID=be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad description=kube-system/kube-proxy-btdv8/kube-proxy id=7762ac14-d62c-46e2-b261-abc5b0e7b8c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ecd97145c0d7b31f3583e7bebe0acf0d8b2b51454655c34f4cb48aefc7c327e
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.414324578Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.421712564Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.421867741Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.421945576Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.426309911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.426475616Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.426553951Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.432252391Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.432403957Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.432474309Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.438107542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.438267314Z" level=info msg="Updated default CNI network name to kindnet"
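
The CNI monitoring lines above show CRI-O reacting to CREATE/WRITE/RENAME events on /etc/cni/net.d as kindnet writes its conflist. A minimal sketch of that kind of directory watch using the third-party github.com/fsnotify/fsnotify package (illustrative only, not CRI-O's code; it needs `go get github.com/fsnotify/fsnotify`):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI configuration directory, as in the CRI-O log above.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			// A CREATE/WRITE/RENAME event here would trigger a re-scan of
			// the conflist files and an update of the default network name.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
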
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	603e268a44605       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   12 seconds ago       Running             coredns                   1                   8d9d476bd1c11       coredns-66bc5c9577-rbc5g               kube-system
	8788d5d59d8ab       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   12 seconds ago       Running             kindnet-cni               1                   564d8cbf24f3e       kindnet-cp9rv                          kube-system
	be77bc16f22dc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   12 seconds ago       Running             kube-proxy                1                   3ecd97145c0d7       kube-proxy-btdv8                       kube-system
	9735bcb80c6e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago       Running             etcd                      1                   2d6ecd4c28e5a       etcd-pause-851396                      kube-system
	686b26511fd82       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago       Running             kube-scheduler            1                   eb213035cee38       kube-scheduler-pause-851396            kube-system
	b8eafc94b9395       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago       Running             kube-controller-manager   1                   f0602806a729b       kube-controller-manager-pause-851396   kube-system
	ea91a163f905e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago       Running             kube-apiserver            1                   ed7885a04bf90       kube-apiserver-pause-851396            kube-system
	6b2e90aa05581       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Exited              coredns                   0                   8d9d476bd1c11       coredns-66bc5c9577-rbc5g               kube-system
	79ec0f028945c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   564d8cbf24f3e       kindnet-cp9rv                          kube-system
	c967e9d7f93ee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   3ecd97145c0d7       kube-proxy-btdv8                       kube-system
	a86a7e745509c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   ed7885a04bf90       kube-apiserver-pause-851396            kube-system
	14ce68d0c4b73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   eb213035cee38       kube-scheduler-pause-851396            kube-system
	21cb0a447d3b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   2d6ecd4c28e5a       etcd-pause-851396                      kube-system
	ce8a08e19ba1e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   f0602806a729b       kube-controller-manager-pause-851396   kube-system
	
	
	==> coredns [603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48541 - 18344 "HINFO IN 6474641182252598523.6858030621578313824. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011973927s
	
	
	==> coredns [6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51671 - 29961 "HINFO IN 2626729832729729817.1798040002292881018. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004069275s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-851396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-851396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=pause-851396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_10_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:10:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-851396
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:11:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:11:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-851396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                c212ee43-2380-4ad2-8c59-c05d7390901a
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rbc5g                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     68s
	  kube-system                 etcd-pause-851396                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-cp9rv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      68s
	  kube-system                 kube-apiserver-pause-851396             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-851396    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-btdv8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-851396             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 66s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     83s (x7 over 84s)  kubelet          Node pause-851396 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    83s (x8 over 84s)  kubelet          Node pause-851396 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  83s (x9 over 84s)  kubelet          Node pause-851396 status is now: NodeHasSufficientMemory
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s                kubelet          Node pause-851396 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s                kubelet          Node pause-851396 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s                kubelet          Node pause-851396 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           69s                node-controller  Node pause-851396 event: Registered Node pause-851396 in Controller
	  Normal   NodeReady                27s                kubelet          Node pause-851396 status is now: NodeReady
	  Normal   RegisteredNode           3s                 node-controller  Node pause-851396 event: Registered Node pause-851396 in Controller
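The node description above matches what `kubectl describe node` reports. A minimal sketch for pulling the same view against this cluster, assuming minikube has registered a kubectl context named after the profile:

	# describe the single control-plane node of the pause-851396 cluster
	kubectl --context pause-851396 describe node pause-851396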
	
	
	==> dmesg <==
	[Nov23 10:45] overlayfs: idmapped layers are currently not supported
	[  +3.779904] overlayfs: idmapped layers are currently not supported
	[Nov23 10:46] overlayfs: idmapped layers are currently not supported
	[Nov23 10:47] overlayfs: idmapped layers are currently not supported
	[Nov23 10:49] overlayfs: idmapped layers are currently not supported
	[Nov23 10:53] overlayfs: idmapped layers are currently not supported
	[Nov23 10:54] overlayfs: idmapped layers are currently not supported
	[Nov23 10:55] overlayfs: idmapped layers are currently not supported
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [21cb0a447d3b944086ee0b5509988e688b45f2688eb9d6ada2ba4aaff747f8e0] <==
	{"level":"warn","ts":"2025-11-23T11:10:20.979367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.036964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.093694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.179227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.235548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.309754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.517456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T11:11:17.955646Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T11:11:17.955705Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-851396","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-23T11:11:17.955811Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T11:11:18.238675Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-11-23T11:11:18.239144Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-23T11:11:18.239190Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T11:11:18.239210Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-23T11:11:18.239125Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239504Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239532Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T11:11:18.239541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239579Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239592Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T11:11:18.239599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T11:11:18.242625Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-23T11:11:18.242715Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T11:11:18.242747Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T11:11:18.242755Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-851396","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a] <==
	{"level":"warn","ts":"2025-11-23T11:11:30.191843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.266446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.283404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.304070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.343956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.389676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.457687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.479033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.508003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.541947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.587982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.603270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.637989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.652454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.776866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.813480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.852434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.891652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.944892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.983395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.009325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.060102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.095294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.112178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.233261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39982","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:11:39 up  3:54,  0 user,  load average: 4.60, 3.62, 2.66
	Linux pause-851396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
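The kernel block above combines uptime, the kernel string, and the OS release name. A minimal sketch of gathering the same facts from inside the node, assuming the pause-851396 profile; the exact commands the report tooling runs may differ:

	# uptime, kernel version, and distro name from inside the node
	minikube ssh -p pause-851396 -- 'uptime && uname -a && grep PRETTY_NAME /etc/os-release'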
	
	
	==> kindnet [79ec0f028945c9366cd1dea3a928591cacc78dd9cb18b919359c4591dd509b5b] <==
	I1123 11:10:32.589211       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:10:32.594867       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:10:32.595062       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:10:32.595105       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:10:32.595143       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:10:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:10:32.817116       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:10:32.817136       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:10:32.817145       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:10:32.817451       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:11:02.812897       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:11:02.817402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 11:11:02.817734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:11:02.817784       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 11:11:04.317735       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:11:04.317863       1 metrics.go:72] Registering metrics
	I1123 11:11:04.317991       1 controller.go:711] "Syncing nftables rules"
	I1123 11:11:12.818254       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:11:12.818312       1 main.go:301] handling current node
	
	
	==> kindnet [8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09] <==
	I1123 11:11:27.158739       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:11:27.189657       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:11:27.189817       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:11:27.189830       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:11:27.189841       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:11:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:11:27.418178       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:11:27.418210       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:11:27.418222       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:11:27.418915       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 11:11:33.221480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:11:33.221598       1 metrics.go:72] Registering metrics
	I1123 11:11:33.221714       1 controller.go:711] "Syncing nftables rules"
	I1123 11:11:37.413563       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:11:37.413654       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a86a7e745509cbb7107ad994a24d49af455b06ee1caa337e1ad42a41b1ce63a4] <==
	W1123 11:11:17.986545       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.986633       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.986846       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987121       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.988798       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989087       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989210       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989309       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989396       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989739       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989827       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989908       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990097       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990155       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990231       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987570       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987616       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987643       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987669       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987696       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987723       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990512       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990568       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990616       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990667       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ea91a163f905eeba88b8ea7e3801829d87ba9f5d61e6bb1c910ee0a26e354d25] <==
	I1123 11:11:33.063483       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 11:11:33.093918       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:11:33.091264       1 policy_source.go:240] refreshing policies
	I1123 11:11:33.063496       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:11:33.083202       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 11:11:33.136166       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:11:33.136782       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:11:33.136887       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 11:11:33.138261       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:11:33.138349       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:11:33.138361       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:11:33.141944       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1123 11:11:33.138470       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:11:33.140652       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:11:33.157326       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:11:33.157549       1 aggregator.go:171] initial CRD sync complete...
	I1123 11:11:33.157648       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:11:33.157677       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:11:33.157706       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:11:33.385055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:11:34.667150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:11:36.500947       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:11:36.519145       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:11:36.562236       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:11:36.703877       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b8eafc94b9395c4bfec93915f95f833d7d764c18b6dbef394cf3c7cd472463a3] <==
	I1123 11:11:36.078982       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 11:11:36.079118       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 11:11:36.081693       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:11:36.086722       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:11:36.086882       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:11:36.086924       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 11:11:36.087018       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 11:11:36.087076       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 11:11:36.087107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 11:11:36.087135       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 11:11:36.087584       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:11:36.089702       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:11:36.090770       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:11:36.091123       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:11:36.091360       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-851396"
	I1123 11:11:36.091557       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:11:36.101516       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:11:36.102752       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 11:11:36.104261       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 11:11:36.109478       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:11:36.106969       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:11:36.275209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:11:36.275457       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:11:36.275488       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:11:36.311046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [ce8a08e19ba1ec6dae45de9cd6dcd5f735e8ad071d611bacd75625796e97de95] <==
	I1123 11:10:30.638628       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:10:30.638724       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:10:30.638783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-851396"
	I1123 11:10:30.638814       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 11:10:30.639332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:10:30.639885       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-851396" podCIDRs=["10.244.0.0/24"]
	I1123 11:10:30.640181       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:10:30.640526       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:10:30.640653       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:10:30.640682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 11:10:30.649632       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 11:10:30.650524       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:10:30.650832       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 11:10:30.670590       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:10:30.674286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:10:30.696882       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:10:30.696929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 11:10:30.708447       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 11:10:30.717518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 11:10:30.725989       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:10:30.734451       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:10:30.738068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:10:30.738087       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:10:30.738093       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:11:15.644821       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad] <==
	I1123 11:11:28.848551       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:11:30.083749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:11:33.201466       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:11:33.201541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:11:33.201641       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:11:34.119743       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:11:34.119869       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:11:34.125201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:11:34.125765       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:11:34.125979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:11:34.128766       1 config.go:200] "Starting service config controller"
	I1123 11:11:34.128842       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:11:34.128884       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:11:34.128930       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:11:34.128968       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:11:34.129018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:11:34.130634       1 config.go:309] "Starting node config controller"
	I1123 11:11:34.130715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:11:34.130764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:11:34.229499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:11:34.229570       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:11:34.229803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c967e9d7f93ee9491f74a45910a90d4ac5a80619e4fb348641b53bfc542f3d5b] <==
	I1123 11:10:32.383606       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:10:32.510580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:10:32.613274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:10:32.613313       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:10:32.613382       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:10:32.676662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:10:32.676784       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:10:32.682126       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:10:32.682497       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:10:32.682715       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:10:32.686031       1 config.go:200] "Starting service config controller"
	I1123 11:10:32.686047       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:10:32.686070       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:10:32.686074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:10:32.686086       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:10:32.686092       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:10:32.686843       1 config.go:309] "Starting node config controller"
	I1123 11:10:32.686914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:10:32.686944       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:10:32.789511       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:10:32.789547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:10:32.789586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [14ce68d0c4b73cef6f9a8aff77094f3572fbac9afd09ad8be6f574da13448ffa] <==
	E1123 11:10:23.500912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:10:23.501096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:10:23.501143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:10:23.501200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:10:23.501241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:10:23.508223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:10:23.508463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:10:23.508510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:10:23.508624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:10:23.508678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:10:23.508815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 11:10:23.508855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:10:23.508928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:10:23.508980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:10:23.509015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:10:23.509160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:10:23.509202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:10:23.509276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1123 11:10:25.090348       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:17.958475       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 11:11:17.958496       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 11:11:17.958516       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 11:11:17.958540       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:17.958732       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 11:11:17.958746       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51] <==
	I1123 11:11:31.275122       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:11:34.012530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:11:34.012630       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:11:34.021058       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:11:34.021095       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:11:34.021135       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:34.021142       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:34.021158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:11:34.021164       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:11:34.026542       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:11:34.026630       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:11:34.122164       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:11:34.122253       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:34.122185       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.544772    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cp9rv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="909d682e-40a7-4fb7-a79f-ba04282e4abc" pod="kube-system/kindnet-cp9rv"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.545253    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-851396\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e2239c41ec8369ff04473fd27b15bba7" pod="kube-system/kube-scheduler-pause-851396"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.545655    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-851396\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cf8839e94c7c142d74f85b195b59dd2f" pod="kube-system/etcd-pause-851396"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.546031    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-851396\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d22d3fe4550d7a7aec9720862b6578b5" pod="kube-system/kube-apiserver-pause-851396"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: I1123 11:11:26.650188    1310 scope.go:117] "RemoveContainer" containerID="6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.674187    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-851396\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.674882    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-cp9rv\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="909d682e-40a7-4fb7-a79f-ba04282e4abc" pod="kube-system/kindnet-cp9rv"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.686252    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-851396\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.734353    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-rbc5g\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="3f642fdf-2820-4ee7-b750-42bafbb58242" pod="kube-system/coredns-66bc5c9577-rbc5g"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.773775    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="e2239c41ec8369ff04473fd27b15bba7" pod="kube-system/kube-scheduler-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.813719    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="cf8839e94c7c142d74f85b195b59dd2f" pod="kube-system/etcd-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.835958    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="d22d3fe4550d7a7aec9720862b6578b5" pod="kube-system/kube-apiserver-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.843575    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="4bf99c13bd44edd9822086d13efa7db0" pod="kube-system/kube-controller-manager-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.858959    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-btdv8\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="01daa514-dacd-44f2-ac38-0983f6684774" pod="kube-system/kube-proxy-btdv8"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.868975    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="e2239c41ec8369ff04473fd27b15bba7" pod="kube-system/kube-scheduler-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.881796    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="cf8839e94c7c142d74f85b195b59dd2f" pod="kube-system/etcd-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.884224    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="d22d3fe4550d7a7aec9720862b6578b5" pod="kube-system/kube-apiserver-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.885541    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="4bf99c13bd44edd9822086d13efa7db0" pod="kube-system/kube-controller-manager-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.886650    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-btdv8\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="01daa514-dacd-44f2-ac38-0983f6684774" pod="kube-system/kube-proxy-btdv8"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.887743    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-cp9rv\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="909d682e-40a7-4fb7-a79f-ba04282e4abc" pod="kube-system/kindnet-cp9rv"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.891728    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-rbc5g\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="3f642fdf-2820-4ee7-b750-42bafbb58242" pod="kube-system/coredns-66bc5c9577-rbc5g"
	Nov 23 11:11:36 pause-851396 kubelet[1310]: W1123 11:11:36.479845    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 23 11:11:37 pause-851396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:11:37 pause-851396 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:11:37 pause-851396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
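The scheduler log above ends with a graceful shutdown and the kubelet log ends with systemd stopping kubelet.service; the helper below then probes the same state through the minikube status templates. A minimal manual spot-check in the same spirit, not part of the harness (binary path and profile name are taken from this report; the systemctl and crictl probes are assumptions about how one might confirm the node state by hand):

  # assumed spot-check: is kubelet still active inside the paused node?
  out/minikube-linux-arm64 ssh -p pause-851396 sudo systemctl is-active kubelet
  # assumed spot-check: list CRI containers and their states after the pause attempt
  out/minikube-linux-arm64 ssh -p pause-851396 sudo crictl ps -a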
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-851396 -n pause-851396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-851396 -n pause-851396: exit status 2 (456.378297ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-851396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-851396
helpers_test.go:243: (dbg) docker inspect pause-851396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669",
	        "Created": "2025-11-23T11:09:54.98996582Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696273,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:09:55.055385558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/hosts",
	        "LogPath": "/var/lib/docker/containers/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669/d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669-json.log",
	        "Name": "/pause-851396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-851396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-851396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3c5656a1394526b5f7534daf32f877afc480cbc1867fae0bd04e36f99909669",
	                "LowerDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c86b8c84ef78453da13d036112969489ca33fd3d4b7adb450083b0914417726a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-851396",
	                "Source": "/var/lib/docker/volumes/pause-851396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-851396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-851396",
	                "name.minikube.sigs.k8s.io": "pause-851396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a81ca9c6bb83585c66c1e90825dc23d4f21ee9e5d88d384ef1405562d8f1160f",
	            "SandboxKey": "/var/run/docker/netns/a81ca9c6bb83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33767"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33771"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33769"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33770"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-851396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:66:52:4c:21:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77f5a7963cbc0840fcbcaf845d9d9be3309b173bb6a8e85ed146d2781ab597e8",
	                    "EndpointID": "195cb4320447a6af5e935b670f57b3ced20c8cc01abfc806100cb74eb39088f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-851396",
	                        "d3c5656a1394"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
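The helper captures the full docker inspect dump above so nothing is lost; when only a couple of fields matter, the same data can be narrowed with a Go template. A hedged sketch using standard docker inspect --format syntax (container name, network name, and the 8443/tcp port come from the dump above):

  # is the kic container itself paused at the Docker level?
  docker inspect pause-851396 --format '{{.State.Status}} paused={{.State.Paused}}'
  # host port that the apiserver port 8443/tcp is published on
  docker inspect pause-851396 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
  # static IP of the container on the pause-851396 network
  docker inspect pause-851396 --format '{{(index .NetworkSettings.Networks "pause-851396").IPAddress}}'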
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-851396 -n pause-851396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-851396 -n pause-851396: exit status 2 (421.598255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-851396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-851396 logs -n 25: (1.731428292s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-344709 sudo systemctl cat kubelet --no-pager                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status docker --all --full --no-pager                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat docker --no-pager                                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/docker/daemon.json                                                          │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo docker system info                                                                   │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cri-dockerd --version                                                                │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat containerd --no-pager                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/containerd/config.toml                                                      │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo containerd config dump                                                               │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status crio --all --full --no-pager                                        │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat crio --no-pager                                                        │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo crio config                                                                          │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p cilium-344709                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ pause   │ -p pause-851396 --alsologtostderr -v=5                                                                     │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:11:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:11:29.913880  704744 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:11:29.914380  704744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:29.914442  704744 out.go:374] Setting ErrFile to fd 2...
	I1123 11:11:29.914461  704744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:29.914768  704744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:11:29.915243  704744 out.go:368] Setting JSON to false
	I1123 11:11:29.916248  704744 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14039,"bootTime":1763882251,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:11:29.916351  704744 start.go:143] virtualization:  
	I1123 11:11:29.919972  704744 out.go:179] * [force-systemd-env-613417] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:11:29.923876  704744 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:11:29.923982  704744 notify.go:221] Checking for updates...
	I1123 11:11:29.929689  704744 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:11:29.932740  704744 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:11:29.935671  704744 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:11:29.938665  704744 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:11:29.941557  704744 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1123 11:11:29.944862  704744 config.go:182] Loaded profile config "pause-851396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:11:29.945027  704744 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:11:30.005819  704744 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:11:30.005969  704744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:11:30.144353  704744 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:11:30.126699014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:11:30.144464  704744 docker.go:319] overlay module found
	I1123 11:11:30.147780  704744 out.go:179] * Using the docker driver based on user configuration
	I1123 11:11:30.150770  704744 start.go:309] selected driver: docker
	I1123 11:11:30.150799  704744 start.go:927] validating driver "docker" against <nil>
	I1123 11:11:30.150815  704744 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:11:30.151563  704744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:11:30.258476  704744 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:11:30.245773799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:11:30.258623  704744 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 11:11:30.258863  704744 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 11:11:30.261888  704744 out.go:179] * Using Docker driver with root privileges
	I1123 11:11:30.264821  704744 cni.go:84] Creating CNI manager for ""
	I1123 11:11:30.264894  704744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:11:30.264902  704744 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:11:30.265005  704744 start.go:353] cluster config:
	{Name:force-systemd-env-613417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-613417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:11:30.268125  704744 out.go:179] * Starting "force-systemd-env-613417" primary control-plane node in "force-systemd-env-613417" cluster
	I1123 11:11:30.271071  704744 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:11:30.273947  704744 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:11:30.276651  704744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:11:30.276697  704744 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:11:30.276707  704744 cache.go:65] Caching tarball of preloaded images
	I1123 11:11:30.276795  704744 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:11:30.276805  704744 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:11:30.276918  704744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/force-systemd-env-613417/config.json ...
	I1123 11:11:30.276937  704744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/force-systemd-env-613417/config.json: {Name:mk7b635cd35cc121b9c799624a9a217c93a1b182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:11:30.277113  704744 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:11:30.310527  704744 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:11:30.310548  704744 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:11:30.310564  704744 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:11:30.310594  704744 start.go:360] acquireMachinesLock for force-systemd-env-613417: {Name:mk1ca84ab38c833c22f2813ed0795c8158deb10a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:11:30.310696  704744 start.go:364] duration metric: took 86.36µs to acquireMachinesLock for "force-systemd-env-613417"
	I1123 11:11:30.310722  704744 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-613417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-613417 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:11:30.310795  704744 start.go:125] createHost starting for "" (driver="docker")
	I1123 11:11:27.452471  702840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:11:27.454483  702840 addons.go:530] duration metric: took 10.014889ms for enable addons: enabled=[]
	I1123 11:11:27.848934  702840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:11:27.868488  702840 node_ready.go:35] waiting up to 6m0s for node "pause-851396" to be "Ready" ...
	I1123 11:11:30.314317  704744 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:11:30.314564  704744 start.go:159] libmachine.API.Create for "force-systemd-env-613417" (driver="docker")
	I1123 11:11:30.314594  704744 client.go:173] LocalClient.Create starting
	I1123 11:11:30.314662  704744 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:11:30.314694  704744 main.go:143] libmachine: Decoding PEM data...
	I1123 11:11:30.314711  704744 main.go:143] libmachine: Parsing certificate...
	I1123 11:11:30.314762  704744 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:11:30.314778  704744 main.go:143] libmachine: Decoding PEM data...
	I1123 11:11:30.314798  704744 main.go:143] libmachine: Parsing certificate...
	I1123 11:11:30.315164  704744 cli_runner.go:164] Run: docker network inspect force-systemd-env-613417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:11:30.335629  704744 cli_runner.go:211] docker network inspect force-systemd-env-613417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:11:30.335708  704744 network_create.go:284] running [docker network inspect force-systemd-env-613417] to gather additional debugging logs...
	I1123 11:11:30.335731  704744 cli_runner.go:164] Run: docker network inspect force-systemd-env-613417
	W1123 11:11:30.369619  704744 cli_runner.go:211] docker network inspect force-systemd-env-613417 returned with exit code 1
	I1123 11:11:30.369649  704744 network_create.go:287] error running [docker network inspect force-systemd-env-613417]: docker network inspect force-systemd-env-613417: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-613417 not found
	I1123 11:11:30.369675  704744 network_create.go:289] output of [docker network inspect force-systemd-env-613417]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-613417 not found
	
	** /stderr **
	I1123 11:11:30.369774  704744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:11:30.392693  704744 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:11:30.392979  704744 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:11:30.393307  704744 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:11:30.393658  704744 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-77f5a7963cbc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:af:f2:e8:1d:c7} reservation:<nil>}
	I1123 11:11:30.394070  704744 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1eef0}
	I1123 11:11:30.394086  704744 network_create.go:124] attempt to create docker network force-systemd-env-613417 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 11:11:30.394148  704744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-613417 force-systemd-env-613417
	I1123 11:11:30.516682  704744 network_create.go:108] docker network force-systemd-env-613417 192.168.85.0/24 created
	I1123 11:11:30.516713  704744 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-613417" container
	I1123 11:11:30.516804  704744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:11:30.544784  704744 cli_runner.go:164] Run: docker volume create force-systemd-env-613417 --label name.minikube.sigs.k8s.io=force-systemd-env-613417 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:11:30.594720  704744 oci.go:103] Successfully created a docker volume force-systemd-env-613417
	I1123 11:11:30.594800  704744 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-613417-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-613417 --entrypoint /usr/bin/test -v force-systemd-env-613417:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:11:31.255861  704744 oci.go:107] Successfully prepared a docker volume force-systemd-env-613417
	I1123 11:11:31.255919  704744 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:11:31.255929  704744 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:11:31.256003  704744 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-613417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 11:11:32.765025  702840 node_ready.go:49] node "pause-851396" is "Ready"
	I1123 11:11:32.765057  702840 node_ready.go:38] duration metric: took 4.895800741s for node "pause-851396" to be "Ready" ...
	I1123 11:11:32.765070  702840 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:11:32.765146  702840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:11:32.793236  702840 api_server.go:72] duration metric: took 5.349095453s to wait for apiserver process to appear ...
	I1123 11:11:32.793263  702840 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:11:32.793283  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:32.825458  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 11:11:32.825483  702840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 11:11:33.294082  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:33.306470  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:11:33.306514  702840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:11:33.794617  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:33.809201  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:11:33.809271  702840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:11:34.293515  702840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:11:34.304241  702840 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:11:34.305702  702840 api_server.go:141] control plane version: v1.34.1
	I1123 11:11:34.305730  702840 api_server.go:131] duration metric: took 1.512459412s to wait for apiserver health ...
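	The 403/500/200 progression above is the restarted apiserver finishing its bootstrap: the anonymous 403 and the 500s with failing [-] post-start hooks are expected until every hook reports ok, after which /healthz returns plain ok. The same signal can be checked by hand with authenticated kubectl (context name taken from this run):
	
	# verbose health output lists the individual post-start hooks, like the 500 bodies above
	kubectl --context pause-851396 get --raw='/healthz?verbose'
	# poll until the endpoint returns ok
	until kubectl --context pause-851396 get --raw='/healthz' >/dev/null 2>&1; do sleep 1; done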
	I1123 11:11:34.305740  702840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:11:34.312718  702840 system_pods.go:59] 7 kube-system pods found
	I1123 11:11:34.312751  702840 system_pods.go:61] "coredns-66bc5c9577-rbc5g" [3f642fdf-2820-4ee7-b750-42bafbb58242] Running
	I1123 11:11:34.312756  702840 system_pods.go:61] "etcd-pause-851396" [a376ffed-4a2f-4edf-84c9-cd0d314abbe4] Running
	I1123 11:11:34.312760  702840 system_pods.go:61] "kindnet-cp9rv" [909d682e-40a7-4fb7-a79f-ba04282e4abc] Running
	I1123 11:11:34.312764  702840 system_pods.go:61] "kube-apiserver-pause-851396" [c0d1d45e-0288-43ca-897b-be2d54b07389] Running
	I1123 11:11:34.312769  702840 system_pods.go:61] "kube-controller-manager-pause-851396" [092ba549-b8ce-4ab3-97e7-8603367dc014] Running
	I1123 11:11:34.312773  702840 system_pods.go:61] "kube-proxy-btdv8" [01daa514-dacd-44f2-ac38-0983f6684774] Running
	I1123 11:11:34.312777  702840 system_pods.go:61] "kube-scheduler-pause-851396" [cf5e872f-9901-4004-aa75-6a6e5fdb6c16] Running
	I1123 11:11:34.312782  702840 system_pods.go:74] duration metric: took 7.037113ms to wait for pod list to return data ...
	I1123 11:11:34.312789  702840 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:11:34.316989  702840 default_sa.go:45] found service account: "default"
	I1123 11:11:34.317059  702840 default_sa.go:55] duration metric: took 4.263739ms for default service account to be created ...
	I1123 11:11:34.317094  702840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:11:34.320343  702840 system_pods.go:86] 7 kube-system pods found
	I1123 11:11:34.320424  702840 system_pods.go:89] "coredns-66bc5c9577-rbc5g" [3f642fdf-2820-4ee7-b750-42bafbb58242] Running
	I1123 11:11:34.320446  702840 system_pods.go:89] "etcd-pause-851396" [a376ffed-4a2f-4edf-84c9-cd0d314abbe4] Running
	I1123 11:11:34.320467  702840 system_pods.go:89] "kindnet-cp9rv" [909d682e-40a7-4fb7-a79f-ba04282e4abc] Running
	I1123 11:11:34.320502  702840 system_pods.go:89] "kube-apiserver-pause-851396" [c0d1d45e-0288-43ca-897b-be2d54b07389] Running
	I1123 11:11:34.320528  702840 system_pods.go:89] "kube-controller-manager-pause-851396" [092ba549-b8ce-4ab3-97e7-8603367dc014] Running
	I1123 11:11:34.320548  702840 system_pods.go:89] "kube-proxy-btdv8" [01daa514-dacd-44f2-ac38-0983f6684774] Running
	I1123 11:11:34.320569  702840 system_pods.go:89] "kube-scheduler-pause-851396" [cf5e872f-9901-4004-aa75-6a6e5fdb6c16] Running
	I1123 11:11:34.320604  702840 system_pods.go:126] duration metric: took 3.479549ms to wait for k8s-apps to be running ...
	I1123 11:11:34.320630  702840 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:11:34.320713  702840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:11:34.338422  702840 system_svc.go:56] duration metric: took 17.781931ms WaitForService to wait for kubelet
	I1123 11:11:34.338494  702840 kubeadm.go:587] duration metric: took 6.894357416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:11:34.338527  702840 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:11:34.344380  702840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:11:34.344457  702840 node_conditions.go:123] node cpu capacity is 2
	I1123 11:11:34.344488  702840 node_conditions.go:105] duration metric: took 5.937704ms to run NodePressure ...
	I1123 11:11:34.344515  702840 start.go:242] waiting for startup goroutines ...
	I1123 11:11:34.344556  702840 start.go:247] waiting for cluster config update ...
	I1123 11:11:34.344579  702840 start.go:256] writing updated cluster config ...
	I1123 11:11:34.345552  702840 ssh_runner.go:195] Run: rm -f paused
	I1123 11:11:34.350444  702840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:11:34.351043  702840 kapi.go:59] client config for pause-851396: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.key", CAFile:"/home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 11:11:34.356172  702840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbc5g" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.364755  702840 pod_ready.go:94] pod "coredns-66bc5c9577-rbc5g" is "Ready"
	I1123 11:11:34.364827  702840 pod_ready.go:86] duration metric: took 8.626647ms for pod "coredns-66bc5c9577-rbc5g" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.368004  702840 pod_ready.go:83] waiting for pod "etcd-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.375483  702840 pod_ready.go:94] pod "etcd-pause-851396" is "Ready"
	I1123 11:11:34.375509  702840 pod_ready.go:86] duration metric: took 7.471876ms for pod "etcd-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.378218  702840 pod_ready.go:83] waiting for pod "kube-apiserver-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.384409  702840 pod_ready.go:94] pod "kube-apiserver-pause-851396" is "Ready"
	I1123 11:11:34.384436  702840 pod_ready.go:86] duration metric: took 6.176739ms for pod "kube-apiserver-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.386928  702840 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.757835  702840 pod_ready.go:94] pod "kube-controller-manager-pause-851396" is "Ready"
	I1123 11:11:34.757865  702840 pod_ready.go:86] duration metric: took 370.903534ms for pod "kube-controller-manager-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:34.976478  702840 pod_ready.go:83] waiting for pod "kube-proxy-btdv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:35.360767  702840 pod_ready.go:94] pod "kube-proxy-btdv8" is "Ready"
	I1123 11:11:35.360800  702840 pod_ready.go:86] duration metric: took 384.290526ms for pod "kube-proxy-btdv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:35.562604  702840 pod_ready.go:83] waiting for pod "kube-scheduler-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:36.037869  702840 pod_ready.go:94] pod "kube-scheduler-pause-851396" is "Ready"
	I1123 11:11:36.037899  702840 pod_ready.go:86] duration metric: took 475.265627ms for pod "kube-scheduler-pause-851396" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:11:36.037913  702840 pod_ready.go:40] duration metric: took 1.687382612s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:11:36.154792  702840 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:11:36.160279  702840 out.go:179] * Done! kubectl is now configured to use "pause-851396" cluster and "default" namespace by default
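	With the profile reported ready, the node and pod state minikube just waited on can be inspected directly against the same context:
	
	# node and kube-system pod status for the pause-851396 profile
	kubectl --context pause-851396 get nodes
	kubectl --context pause-851396 -n kube-system get pods -o wide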
	I1123 11:11:35.805318  704744 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-613417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.549276546s)
	I1123 11:11:35.805362  704744 kic.go:203] duration metric: took 4.549420647s to extract preloaded images to volume ...
	W1123 11:11:35.805519  704744 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:11:35.805627  704744 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:11:35.861004  704744 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-613417 --name force-systemd-env-613417 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-613417 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-613417 --network force-systemd-env-613417 --ip 192.168.85.2 --volume force-systemd-env-613417:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:11:36.325423  704744 cli_runner.go:164] Run: docker container inspect force-systemd-env-613417 --format={{.State.Running}}
	I1123 11:11:36.376403  704744 cli_runner.go:164] Run: docker container inspect force-systemd-env-613417 --format={{.State.Status}}
	I1123 11:11:36.424684  704744 cli_runner.go:164] Run: docker exec force-systemd-env-613417 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:11:36.531288  704744 oci.go:144] the created container "force-systemd-env-613417" has a running status.
	I1123 11:11:36.531316  704744 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/force-systemd-env-613417/id_rsa...
	I1123 11:11:36.698587  704744 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/force-systemd-env-613417/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1123 11:11:36.698636  704744 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/force-systemd-env-613417/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:11:36.724819  704744 cli_runner.go:164] Run: docker container inspect force-systemd-env-613417 --format={{.State.Status}}
	I1123 11:11:36.750248  704744 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:11:36.750272  704744 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-613417 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:11:36.804451  704744 cli_runner.go:164] Run: docker container inspect force-systemd-env-613417 --format={{.State.Status}}
	I1123 11:11:36.832602  704744 machine.go:94] provisionDockerMachine start ...
	I1123 11:11:36.832691  704744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-613417
	I1123 11:11:36.861506  704744 main.go:143] libmachine: Using SSH client type: native
	I1123 11:11:36.861835  704744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33777 <nil> <nil>}
	I1123 11:11:36.861844  704744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:11:36.862499  704744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
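	The handshake-failed EOF right after container creation is typical while sshd inside the kicbase container is still starting, and libmachine retries the dial. The published SSH port it targets (127.0.0.1:33777 above) can be confirmed with:
	
	# show the host port mapped to the container's SSH port
	docker port force-systemd-env-613417 22/tcp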
	
	
	==> CRI-O <==
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.805766095Z" level=info msg="Started container" PID=2333 containerID=9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a description=kube-system/etcd-pause-851396/etcd id=d3f89f7a-3025-4387-be7f-54dbc29bd1d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d6ecd4c28e5a88968446677d21a7a5653c15b5736e949abb7824b3fac54a68d
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.81779933Z" level=info msg="Created container 686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51: kube-system/kube-scheduler-pause-851396/kube-scheduler" id=6519a74c-40d3-40aa-b6a6-287f7004434b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.818564706Z" level=info msg="Starting container: 686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51" id=8f99d640-f40e-48c2-8833-aa23fb219eab name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.836593811Z" level=info msg="Started container" PID=2331 containerID=686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51 description=kube-system/kube-scheduler-pause-851396/kube-scheduler id=8f99d640-f40e-48c2-8833-aa23fb219eab name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb213035cee3800ddc8f42a034774bd50bcf3429d5f9cb2413cda7fa396fc842
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.846867534Z" level=info msg="Created container 8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09: kube-system/kindnet-cp9rv/kindnet-cni" id=d1ad674b-2f65-4f07-bf24-64d214246c5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.854468527Z" level=info msg="Starting container: 8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09" id=f16a30f0-c11b-4752-8455-a732a60f1b52 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.863487179Z" level=info msg="Started container" PID=2340 containerID=8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09 description=kube-system/kindnet-cp9rv/kindnet-cni id=f16a30f0-c11b-4752-8455-a732a60f1b52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=564d8cbf24f3e184262964d741f41933c9bed4ee52080a133d3dd20f97e91ac3
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.863882991Z" level=info msg="Created container 603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779: kube-system/coredns-66bc5c9577-rbc5g/coredns" id=a82aac4e-0dac-454a-a816-e93df389f857 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.868905049Z" level=info msg="Starting container: 603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779" id=9e0a0148-b8a1-41c0-bb86-d0849dbb9a9f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:26 pause-851396 crio[2076]: time="2025-11-23T11:11:26.873323965Z" level=info msg="Started container" PID=2349 containerID=603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779 description=kube-system/coredns-66bc5c9577-rbc5g/coredns id=9e0a0148-b8a1-41c0-bb86-d0849dbb9a9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d9d476bd1c11d3cc380a0211e9520ae8bb3436a99008179ddc9c2277217b898
	Nov 23 11:11:27 pause-851396 crio[2076]: time="2025-11-23T11:11:27.345666248Z" level=info msg="Created container be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad: kube-system/kube-proxy-btdv8/kube-proxy" id=45996a59-e4de-4a51-aeca-d7e4cdc3424c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:11:27 pause-851396 crio[2076]: time="2025-11-23T11:11:27.352738915Z" level=info msg="Starting container: be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad" id=7762ac14-d62c-46e2-b261-abc5b0e7b8c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:11:27 pause-851396 crio[2076]: time="2025-11-23T11:11:27.373681278Z" level=info msg="Started container" PID=2334 containerID=be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad description=kube-system/kube-proxy-btdv8/kube-proxy id=7762ac14-d62c-46e2-b261-abc5b0e7b8c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ecd97145c0d7b31f3583e7bebe0acf0d8b2b51454655c34f4cb48aefc7c327e
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.414324578Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.421712564Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.421867741Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.421945576Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.426309911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.426475616Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.426553951Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.432252391Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.432403957Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.432474309Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.438107542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:11:37 pause-851396 crio[2076]: time="2025-11-23T11:11:37.438267314Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	603e268a44605       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   15 seconds ago       Running             coredns                   1                   8d9d476bd1c11       coredns-66bc5c9577-rbc5g               kube-system
	8788d5d59d8ab       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   15 seconds ago       Running             kindnet-cni               1                   564d8cbf24f3e       kindnet-cp9rv                          kube-system
	be77bc16f22dc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   15 seconds ago       Running             kube-proxy                1                   3ecd97145c0d7       kube-proxy-btdv8                       kube-system
	9735bcb80c6e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago       Running             etcd                      1                   2d6ecd4c28e5a       etcd-pause-851396                      kube-system
	686b26511fd82       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago       Running             kube-scheduler            1                   eb213035cee38       kube-scheduler-pause-851396            kube-system
	b8eafc94b9395       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago       Running             kube-controller-manager   1                   f0602806a729b       kube-controller-manager-pause-851396   kube-system
	ea91a163f905e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago       Running             kube-apiserver            1                   ed7885a04bf90       kube-apiserver-pause-851396            kube-system
	6b2e90aa05581       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   28 seconds ago       Exited              coredns                   0                   8d9d476bd1c11       coredns-66bc5c9577-rbc5g               kube-system
	79ec0f028945c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   564d8cbf24f3e       kindnet-cp9rv                          kube-system
	c967e9d7f93ee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   3ecd97145c0d7       kube-proxy-btdv8                       kube-system
	a86a7e745509c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   ed7885a04bf90       kube-apiserver-pause-851396            kube-system
	14ce68d0c4b73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   eb213035cee38       kube-scheduler-pause-851396            kube-system
	21cb0a447d3b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   2d6ecd4c28e5a       etcd-pause-851396                      kube-system
	ce8a08e19ba1e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   f0602806a729b       kube-controller-manager-pause-851396   kube-system
	
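	The table above is CRI-O's own view of the node, including the exited pre-restart containers; roughly the same listing can be reproduced with crictl inside the minikube container:
	
	# list all CRI containers, running and exited
	minikube -p pause-851396 ssh -- sudo crictl ps -a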
	
	==> coredns [603e268a44605591597f8af267b964a4cd12c4609e0fb8e8bf758de4a2e49779] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48541 - 18344 "HINFO IN 6474641182252598523.6858030621578313824. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011973927s
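	The repeated "waiting for Kubernetes API" lines are CoreDNS's kubernetes plugin blocking until the restarted apiserver answers; once it does, the server comes up on :53 as shown. The live log of this pod can be followed with (pod name from this run):
	
	kubectl --context pause-851396 -n kube-system logs -f coredns-66bc5c9577-rbc5g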
	
	
	==> coredns [6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51671 - 29961 "HINFO IN 2626729832729729817.1798040002292881018. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004069275s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-851396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-851396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=pause-851396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_10_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:10:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-851396
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:11:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:11:12 +0000   Sun, 23 Nov 2025 11:11:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-851396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                c212ee43-2380-4ad2-8c59-c05d7390901a
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rbc5g                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     71s
	  kube-system                 etcd-pause-851396                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kindnet-cp9rv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      71s
	  kube-system                 kube-apiserver-pause-851396             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-851396    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-btdv8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-pause-851396             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 69s                kube-proxy       
	  Normal   Starting                 8s                 kube-proxy       
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     86s (x7 over 87s)  kubelet          Node pause-851396 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    86s (x8 over 87s)  kubelet          Node pause-851396 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  86s (x9 over 87s)  kubelet          Node pause-851396 status is now: NodeHasSufficientMemory
	  Normal   Starting                 77s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 77s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  76s                kubelet          Node pause-851396 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s                kubelet          Node pause-851396 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s                kubelet          Node pause-851396 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           72s                node-controller  Node pause-851396 event: Registered Node pause-851396 in Controller
	  Normal   NodeReady                30s                kubelet          Node pause-851396 status is now: NodeReady
	  Normal   RegisteredNode           6s                 node-controller  Node pause-851396 event: Registered Node pause-851396 in Controller
	
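	The node description above can be regenerated at any time, and the Ready condition checked in isolation, against the same context and node:
	
	kubectl --context pause-851396 describe node pause-851396
	kubectl --context pause-851396 get node pause-851396 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'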
	
	==> dmesg <==
	[Nov23 10:45] overlayfs: idmapped layers are currently not supported
	[  +3.779904] overlayfs: idmapped layers are currently not supported
	[Nov23 10:46] overlayfs: idmapped layers are currently not supported
	[Nov23 10:47] overlayfs: idmapped layers are currently not supported
	[Nov23 10:49] overlayfs: idmapped layers are currently not supported
	[Nov23 10:53] overlayfs: idmapped layers are currently not supported
	[Nov23 10:54] overlayfs: idmapped layers are currently not supported
	[Nov23 10:55] overlayfs: idmapped layers are currently not supported
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [21cb0a447d3b944086ee0b5509988e688b45f2688eb9d6ada2ba4aaff747f8e0] <==
	{"level":"warn","ts":"2025-11-23T11:10:20.979367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.036964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.093694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.179227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.235548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.309754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:10:21.517456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T11:11:17.955646Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T11:11:17.955705Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-851396","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-23T11:11:17.955811Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T11:11:18.238675Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-11-23T11:11:18.239144Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-23T11:11:18.239190Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T11:11:18.239210Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-23T11:11:18.239125Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239504Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239532Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T11:11:18.239541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239579Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T11:11:18.239592Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T11:11:18.239599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T11:11:18.242625Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-23T11:11:18.242715Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T11:11:18.242747Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T11:11:18.242755Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-851396","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [9735bcb80c6e21f1f4da4e9d7d67ffe689415f26d9c3e928218be89122a2742a] <==
	{"level":"warn","ts":"2025-11-23T11:11:30.191843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.266446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.283404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.304070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.343956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.389676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.457687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.479033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.508003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.541947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.587982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.603270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.637989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.652454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.776866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.813480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.852434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.891652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.944892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:30.983395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.009325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.060102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.095294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.112178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:11:31.233261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39982","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:11:42 up  3:54,  0 user,  load average: 4.60, 3.62, 2.66
	Linux pause-851396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79ec0f028945c9366cd1dea3a928591cacc78dd9cb18b919359c4591dd509b5b] <==
	I1123 11:10:32.589211       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:10:32.594867       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:10:32.595062       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:10:32.595105       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:10:32.595143       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:10:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:10:32.817116       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:10:32.817136       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:10:32.817145       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:10:32.817451       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:11:02.812897       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:11:02.817402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 11:11:02.817734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:11:02.817784       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 11:11:04.317735       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:11:04.317863       1 metrics.go:72] Registering metrics
	I1123 11:11:04.317991       1 controller.go:711] "Syncing nftables rules"
	I1123 11:11:12.818254       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:11:12.818312       1 main.go:301] handling current node
	
	
	==> kindnet [8788d5d59d8ab3e2ceafd47f6d10b987c975c5ac69d80c63fee8fd0198f1ab09] <==
	I1123 11:11:27.158739       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:11:27.189657       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:11:27.189817       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:11:27.189830       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:11:27.189841       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:11:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:11:27.418178       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:11:27.418210       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:11:27.418222       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:11:27.418915       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 11:11:33.221480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:11:33.221598       1 metrics.go:72] Registering metrics
	I1123 11:11:33.221714       1 controller.go:711] "Syncing nftables rules"
	I1123 11:11:37.413563       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:11:37.413654       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a86a7e745509cbb7107ad994a24d49af455b06ee1caa337e1ad42a41b1ce63a4] <==
	W1123 11:11:17.986545       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.986633       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.986846       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987121       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.988798       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989087       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989210       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989309       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989396       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989739       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989827       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.989908       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990097       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990155       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990231       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987570       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987616       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987643       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987669       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987696       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.987723       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990512       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990568       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990616       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 11:11:17.990667       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ea91a163f905eeba88b8ea7e3801829d87ba9f5d61e6bb1c910ee0a26e354d25] <==
	I1123 11:11:33.063483       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 11:11:33.093918       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:11:33.091264       1 policy_source.go:240] refreshing policies
	I1123 11:11:33.063496       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:11:33.083202       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 11:11:33.136166       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:11:33.136782       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:11:33.136887       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 11:11:33.138261       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:11:33.138349       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:11:33.138361       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:11:33.141944       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1123 11:11:33.138470       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:11:33.140652       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:11:33.157326       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:11:33.157549       1 aggregator.go:171] initial CRD sync complete...
	I1123 11:11:33.157648       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:11:33.157677       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:11:33.157706       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:11:33.385055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:11:34.667150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:11:36.500947       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:11:36.519145       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:11:36.562236       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:11:36.703877       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b8eafc94b9395c4bfec93915f95f833d7d764c18b6dbef394cf3c7cd472463a3] <==
	I1123 11:11:36.078982       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 11:11:36.079118       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 11:11:36.081693       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:11:36.086722       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:11:36.086882       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:11:36.086924       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 11:11:36.087018       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 11:11:36.087076       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 11:11:36.087107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 11:11:36.087135       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 11:11:36.087584       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:11:36.089702       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:11:36.090770       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:11:36.091123       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:11:36.091360       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-851396"
	I1123 11:11:36.091557       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:11:36.101516       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:11:36.102752       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 11:11:36.104261       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 11:11:36.109478       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:11:36.106969       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:11:36.275209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:11:36.275457       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:11:36.275488       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:11:36.311046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [ce8a08e19ba1ec6dae45de9cd6dcd5f735e8ad071d611bacd75625796e97de95] <==
	I1123 11:10:30.638628       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:10:30.638724       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:10:30.638783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-851396"
	I1123 11:10:30.638814       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 11:10:30.639332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:10:30.639885       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-851396" podCIDRs=["10.244.0.0/24"]
	I1123 11:10:30.640181       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:10:30.640526       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:10:30.640653       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:10:30.640682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 11:10:30.649632       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 11:10:30.650524       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:10:30.650832       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 11:10:30.670590       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:10:30.674286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:10:30.696882       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:10:30.696929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 11:10:30.708447       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 11:10:30.717518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 11:10:30.725989       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:10:30.734451       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:10:30.738068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:10:30.738087       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:10:30.738093       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:11:15.644821       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [be77bc16f22dc57f4908fdd0bb6f90e934f7dad13ef1f7f5b99b394a949572ad] <==
	I1123 11:11:28.848551       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:11:30.083749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:11:33.201466       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:11:33.201541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:11:33.201641       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:11:34.119743       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:11:34.119869       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:11:34.125201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:11:34.125765       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:11:34.125979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:11:34.128766       1 config.go:200] "Starting service config controller"
	I1123 11:11:34.128842       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:11:34.128884       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:11:34.128930       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:11:34.128968       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:11:34.129018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:11:34.130634       1 config.go:309] "Starting node config controller"
	I1123 11:11:34.130715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:11:34.130764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:11:34.229499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:11:34.229570       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:11:34.229803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c967e9d7f93ee9491f74a45910a90d4ac5a80619e4fb348641b53bfc542f3d5b] <==
	I1123 11:10:32.383606       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:10:32.510580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:10:32.613274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:10:32.613313       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:10:32.613382       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:10:32.676662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:10:32.676784       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:10:32.682126       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:10:32.682497       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:10:32.682715       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:10:32.686031       1 config.go:200] "Starting service config controller"
	I1123 11:10:32.686047       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:10:32.686070       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:10:32.686074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:10:32.686086       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:10:32.686092       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:10:32.686843       1 config.go:309] "Starting node config controller"
	I1123 11:10:32.686914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:10:32.686944       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:10:32.789511       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:10:32.789547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:10:32.789586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [14ce68d0c4b73cef6f9a8aff77094f3572fbac9afd09ad8be6f574da13448ffa] <==
	E1123 11:10:23.500912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:10:23.501096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:10:23.501143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:10:23.501200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:10:23.501241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:10:23.508223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:10:23.508463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:10:23.508510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:10:23.508624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:10:23.508678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:10:23.508815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 11:10:23.508855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:10:23.508928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:10:23.508980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:10:23.509015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:10:23.509160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:10:23.509202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:10:23.509276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1123 11:10:25.090348       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:17.958475       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 11:11:17.958496       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 11:11:17.958516       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 11:11:17.958540       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:17.958732       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 11:11:17.958746       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [686b26511fd826f38eef8464e2a5327e3b451a84abeaf74efd5319c54461ac51] <==
	I1123 11:11:31.275122       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:11:34.012530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:11:34.012630       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:11:34.021058       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:11:34.021095       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:11:34.021135       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:34.021142       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:34.021158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:11:34.021164       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:11:34.026542       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:11:34.026630       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:11:34.122164       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:11:34.122253       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:11:34.122185       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.544772    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cp9rv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="909d682e-40a7-4fb7-a79f-ba04282e4abc" pod="kube-system/kindnet-cp9rv"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.545253    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-851396\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e2239c41ec8369ff04473fd27b15bba7" pod="kube-system/kube-scheduler-pause-851396"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.545655    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-851396\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cf8839e94c7c142d74f85b195b59dd2f" pod="kube-system/etcd-pause-851396"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: E1123 11:11:26.546031    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-851396\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d22d3fe4550d7a7aec9720862b6578b5" pod="kube-system/kube-apiserver-pause-851396"
	Nov 23 11:11:26 pause-851396 kubelet[1310]: I1123 11:11:26.650188    1310 scope.go:117] "RemoveContainer" containerID="6b2e90aa055810964c869cc420e127d674241a752607dc6842b2a44fb5d0c4f0"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.674187    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-851396\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.674882    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-cp9rv\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="909d682e-40a7-4fb7-a79f-ba04282e4abc" pod="kube-system/kindnet-cp9rv"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.686252    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-851396\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.734353    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-rbc5g\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="3f642fdf-2820-4ee7-b750-42bafbb58242" pod="kube-system/coredns-66bc5c9577-rbc5g"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.773775    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="e2239c41ec8369ff04473fd27b15bba7" pod="kube-system/kube-scheduler-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.813719    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="cf8839e94c7c142d74f85b195b59dd2f" pod="kube-system/etcd-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.835958    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="d22d3fe4550d7a7aec9720862b6578b5" pod="kube-system/kube-apiserver-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.843575    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="4bf99c13bd44edd9822086d13efa7db0" pod="kube-system/kube-controller-manager-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.858959    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-btdv8\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="01daa514-dacd-44f2-ac38-0983f6684774" pod="kube-system/kube-proxy-btdv8"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.868975    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="e2239c41ec8369ff04473fd27b15bba7" pod="kube-system/kube-scheduler-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.881796    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="cf8839e94c7c142d74f85b195b59dd2f" pod="kube-system/etcd-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.884224    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="d22d3fe4550d7a7aec9720862b6578b5" pod="kube-system/kube-apiserver-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.885541    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-851396\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="4bf99c13bd44edd9822086d13efa7db0" pod="kube-system/kube-controller-manager-pause-851396"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.886650    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-btdv8\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="01daa514-dacd-44f2-ac38-0983f6684774" pod="kube-system/kube-proxy-btdv8"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.887743    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-cp9rv\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="909d682e-40a7-4fb7-a79f-ba04282e4abc" pod="kube-system/kindnet-cp9rv"
	Nov 23 11:11:32 pause-851396 kubelet[1310]: E1123 11:11:32.891728    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-rbc5g\" is forbidden: User \"system:node:pause-851396\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-851396' and this object" podUID="3f642fdf-2820-4ee7-b750-42bafbb58242" pod="kube-system/coredns-66bc5c9577-rbc5g"
	Nov 23 11:11:36 pause-851396 kubelet[1310]: W1123 11:11:36.479845    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 23 11:11:37 pause-851396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:11:37 pause-851396 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:11:37 pause-851396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-851396 -n pause-851396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-851396 -n pause-851396: exit status 2 (431.695246ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-851396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.297273ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:13:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-378086 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-378086 describe deploy/metrics-server -n kube-system: exit status 1 (81.461568ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-378086 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-378086
helpers_test.go:243: (dbg) docker inspect old-k8s-version-378086:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388",
	        "Created": "2025-11-23T11:12:54.956037881Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 713967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:12:55.04151766Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/hostname",
	        "HostsPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/hosts",
	        "LogPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388-json.log",
	        "Name": "/old-k8s-version-378086",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378086:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378086",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388",
	                "LowerDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378086",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378086/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378086",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378086",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378086",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21a15903a0f64f92546c3f13877e1ddc46334360a58fc99be257a4c111c2c61c",
	            "SandboxKey": "/var/run/docker/netns/21a15903a0f6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378086": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:b5:a3:5b:7d:2e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad991492cc1b5405599bff7adffac92b2e633269fafa0d884a2cf0b41e4105f6",
	                    "EndpointID": "df145df9408ef5d00d9053dc00161bd92ce556b6f4fc50d0bb0e9e3b6e83220d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-378086",
	                        "c67933f5eb0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
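For reference, the published ports recorded in the inspect output above (22, 2376, 5000, 8443 and 32443, each bound to 127.0.0.1) can be read back with the same Go template minikube uses later in this log to find the SSH port; a minimal sketch, using the container name from this run:

    # full port map as JSON
    docker container inspect old-k8s-version-378086 -f '{{json .NetworkSettings.Ports}}'
    # SSH port only (prints 33792 in this run)
    docker container inspect old-k8s-version-378086 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'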
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 25: (1.483926966s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-344709 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo containerd config dump                                                                                                                                                                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo crio config                                                                                                                                                                                                             │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p cilium-344709                                                                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ pause   │ -p pause-851396 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p pause-851396                                                                                                                                                                                                                               │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p force-systemd-env-613417                                                                                                                                                                                                                   │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ cert-options-700578 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:12:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:12:48.848016  713582 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:12:48.848162  713582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:12:48.848174  713582 out.go:374] Setting ErrFile to fd 2...
	I1123 11:12:48.848180  713582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:12:48.848552  713582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:12:48.849095  713582 out.go:368] Setting JSON to false
	I1123 11:12:48.850772  713582 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14118,"bootTime":1763882251,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:12:48.850897  713582 start.go:143] virtualization:  
	I1123 11:12:48.854462  713582 out.go:179] * [old-k8s-version-378086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:12:48.859021  713582 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:12:48.859253  713582 notify.go:221] Checking for updates...
	I1123 11:12:48.865612  713582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:12:48.868884  713582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:12:48.872151  713582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:12:48.875272  713582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:12:48.878433  713582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:12:48.882052  713582 config.go:182] Loaded profile config "cert-expiration-629387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:12:48.882167  713582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:12:48.911623  713582 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:12:48.911748  713582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:12:48.972775  713582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:12:48.962672206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:12:48.972885  713582 docker.go:319] overlay module found
	I1123 11:12:48.976036  713582 out.go:179] * Using the docker driver based on user configuration
	I1123 11:12:48.979008  713582 start.go:309] selected driver: docker
	I1123 11:12:48.979073  713582 start.go:927] validating driver "docker" against <nil>
	I1123 11:12:48.979088  713582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:12:48.979979  713582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:12:49.049255  713582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:12:49.033718343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:12:49.049753  713582 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 11:12:49.049996  713582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:12:49.053089  713582 out.go:179] * Using Docker driver with root privileges
	I1123 11:12:49.056053  713582 cni.go:84] Creating CNI manager for ""
	I1123 11:12:49.056128  713582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:12:49.056141  713582 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:12:49.056235  713582 start.go:353] cluster config:
	{Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:12:49.059458  713582 out.go:179] * Starting "old-k8s-version-378086" primary control-plane node in "old-k8s-version-378086" cluster
	I1123 11:12:49.062287  713582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:12:49.065374  713582 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:12:49.068384  713582 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:12:49.068457  713582 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 11:12:49.068473  713582 cache.go:65] Caching tarball of preloaded images
	I1123 11:12:49.068559  713582 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:12:49.068575  713582 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 11:12:49.068678  713582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json ...
	I1123 11:12:49.068699  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json: {Name:mk683f979021f2d763f31c781e56f74fa7f0cceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:12:49.068870  713582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:12:49.100315  713582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:12:49.100343  713582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:12:49.100363  713582 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:12:49.100394  713582 start.go:360] acquireMachinesLock for old-k8s-version-378086: {Name:mkfc344308e200b270c60104d70fe97a5903afde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:12:49.100499  713582 start.go:364] duration metric: took 88.469µs to acquireMachinesLock for "old-k8s-version-378086"
	I1123 11:12:49.100532  713582 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:12:49.100668  713582 start.go:125] createHost starting for "" (driver="docker")
	I1123 11:12:49.106062  713582 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:12:49.106342  713582 start.go:159] libmachine.API.Create for "old-k8s-version-378086" (driver="docker")
	I1123 11:12:49.106379  713582 client.go:173] LocalClient.Create starting
	I1123 11:12:49.106457  713582 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:12:49.106494  713582 main.go:143] libmachine: Decoding PEM data...
	I1123 11:12:49.106514  713582 main.go:143] libmachine: Parsing certificate...
	I1123 11:12:49.106567  713582 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:12:49.106589  713582 main.go:143] libmachine: Decoding PEM data...
	I1123 11:12:49.106601  713582 main.go:143] libmachine: Parsing certificate...
	I1123 11:12:49.106963  713582 cli_runner.go:164] Run: docker network inspect old-k8s-version-378086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:12:49.123656  713582 cli_runner.go:211] docker network inspect old-k8s-version-378086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:12:49.123738  713582 network_create.go:284] running [docker network inspect old-k8s-version-378086] to gather additional debugging logs...
	I1123 11:12:49.123759  713582 cli_runner.go:164] Run: docker network inspect old-k8s-version-378086
	W1123 11:12:49.140569  713582 cli_runner.go:211] docker network inspect old-k8s-version-378086 returned with exit code 1
	I1123 11:12:49.140601  713582 network_create.go:287] error running [docker network inspect old-k8s-version-378086]: docker network inspect old-k8s-version-378086: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-378086 not found
	I1123 11:12:49.140620  713582 network_create.go:289] output of [docker network inspect old-k8s-version-378086]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-378086 not found
	
	** /stderr **
	I1123 11:12:49.140735  713582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:12:49.155682  713582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:12:49.156030  713582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:12:49.156509  713582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:12:49.156764  713582 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8208ef5d0c77 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:74:8b:4b:db:11} reservation:<nil>}
	I1123 11:12:49.157210  713582 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a09450}
	I1123 11:12:49.157234  713582 network_create.go:124] attempt to create docker network old-k8s-version-378086 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 11:12:49.157376  713582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-378086 old-k8s-version-378086
	I1123 11:12:49.222732  713582 network_create.go:108] docker network old-k8s-version-378086 192.168.85.0/24 created
	I1123 11:12:49.222768  713582 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-378086" container
	I1123 11:12:49.222858  713582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:12:49.240226  713582 cli_runner.go:164] Run: docker volume create old-k8s-version-378086 --label name.minikube.sigs.k8s.io=old-k8s-version-378086 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:12:49.257090  713582 oci.go:103] Successfully created a docker volume old-k8s-version-378086
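If the network and volume created in the two steps above need to be checked by hand, the standard Docker inspect templates are enough; a sketch using the names and subnet from this run:

    # expect 192.168.85.0/24 with gateway 192.168.85.1
    docker network inspect old-k8s-version-378086 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # host path backing the node's /var
    docker volume inspect old-k8s-version-378086 --format '{{.Mountpoint}}'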
	I1123 11:12:49.257190  713582 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-378086-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-378086 --entrypoint /usr/bin/test -v old-k8s-version-378086:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:12:49.796202  713582 oci.go:107] Successfully prepared a docker volume old-k8s-version-378086
	I1123 11:12:49.796269  713582 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:12:49.796279  713582 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:12:49.796357  713582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-378086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 11:12:54.877880  713582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-378086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.08147784s)
	I1123 11:12:54.877913  713582 kic.go:203] duration metric: took 5.081631269s to extract preloaded images to volume ...
	W1123 11:12:54.878047  713582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:12:54.878169  713582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:12:54.940397  713582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-378086 --name old-k8s-version-378086 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-378086 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-378086 --network old-k8s-version-378086 --ip 192.168.85.2 --volume old-k8s-version-378086:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:12:55.282327  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Running}}
	I1123 11:12:55.305168  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:12:55.329049  713582 cli_runner.go:164] Run: docker exec old-k8s-version-378086 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:12:55.383588  713582 oci.go:144] the created container "old-k8s-version-378086" has a running status.
	I1123 11:12:55.383623  713582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa...
	I1123 11:12:55.900762  713582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:12:55.920497  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:12:55.939201  713582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:12:55.939230  713582 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-378086 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:12:55.990832  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
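Once the generated key above is installed in /home/docker/.ssh/authorized_keys, the node container is reachable over plain SSH from the host; a sketch, assuming the host port mapped to 22/tcp in this run (33792) and the key path shown above:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa \
      -p 33792 docker@127.0.0.1 hostname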
	I1123 11:12:56.010860  713582 machine.go:94] provisionDockerMachine start ...
	I1123 11:12:56.010992  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:12:56.029758  713582 main.go:143] libmachine: Using SSH client type: native
	I1123 11:12:56.030113  713582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1123 11:12:56.030123  713582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:12:56.030860  713582 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48384->127.0.0.1:33792: read: connection reset by peer
	I1123 11:12:59.184912  713582 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378086
	
	I1123 11:12:59.184935  713582 ubuntu.go:182] provisioning hostname "old-k8s-version-378086"
	I1123 11:12:59.184997  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:12:59.203075  713582 main.go:143] libmachine: Using SSH client type: native
	I1123 11:12:59.203390  713582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1123 11:12:59.203406  713582 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-378086 && echo "old-k8s-version-378086" | sudo tee /etc/hostname
	I1123 11:12:59.364153  713582 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378086
	
	I1123 11:12:59.364245  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:12:59.382102  713582 main.go:143] libmachine: Using SSH client type: native
	I1123 11:12:59.382420  713582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1123 11:12:59.382445  713582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-378086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-378086/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-378086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:12:59.537970  713582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:12:59.537996  713582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:12:59.538025  713582 ubuntu.go:190] setting up certificates
	I1123 11:12:59.538037  713582 provision.go:84] configureAuth start
	I1123 11:12:59.538098  713582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:12:59.555538  713582 provision.go:143] copyHostCerts
	I1123 11:12:59.555606  713582 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:12:59.555618  713582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:12:59.555699  713582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:12:59.555805  713582 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:12:59.555815  713582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:12:59.555843  713582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:12:59.555904  713582 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:12:59.555914  713582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:12:59.555939  713582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:12:59.555989  713582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-378086 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-378086]
	I1123 11:12:59.691995  713582 provision.go:177] copyRemoteCerts
	I1123 11:12:59.692066  713582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:12:59.692105  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:12:59.710382  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:12:59.816831  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 11:12:59.833491  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:12:59.852453  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 11:12:59.871215  713582 provision.go:87] duration metric: took 333.153973ms to configureAuth
	I1123 11:12:59.871245  713582 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:12:59.871436  713582 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:12:59.871547  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:12:59.889662  713582 main.go:143] libmachine: Using SSH client type: native
	I1123 11:12:59.890042  713582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1123 11:12:59.890060  713582 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:13:00.344422  713582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:13:00.344453  713582 machine.go:97] duration metric: took 4.333567323s to provisionDockerMachine
	I1123 11:13:00.344464  713582 client.go:176] duration metric: took 11.238073609s to LocalClient.Create
	I1123 11:13:00.344480  713582 start.go:167] duration metric: took 11.238139317s to libmachine.API.Create "old-k8s-version-378086"
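The sysconfig drop-in written during the provisioning step above (/etc/sysconfig/crio.minikube) can be read back afterwards from the host; a sketch using the profile name and the test binary from this report:

    out/minikube-linux-arm64 -p old-k8s-version-378086 ssh -- sudo cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '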
	I1123 11:13:00.344487  713582 start.go:293] postStartSetup for "old-k8s-version-378086" (driver="docker")
	I1123 11:13:00.344499  713582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:13:00.344595  713582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:13:00.344654  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:13:00.370713  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:13:00.486256  713582 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:13:00.490080  713582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:13:00.490106  713582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:13:00.490118  713582 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:13:00.490202  713582 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:13:00.490286  713582 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:13:00.490384  713582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:13:00.498144  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:13:00.519121  713582 start.go:296] duration metric: took 174.619352ms for postStartSetup
	I1123 11:13:00.519513  713582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:13:00.536155  713582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json ...
	I1123 11:13:00.536429  713582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:13:00.536476  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:13:00.553809  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:13:00.658703  713582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:13:00.663349  713582 start.go:128] duration metric: took 11.56265715s to createHost
	I1123 11:13:00.663371  713582 start.go:83] releasing machines lock for "old-k8s-version-378086", held for 11.562856734s
	I1123 11:13:00.663457  713582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:13:00.680860  713582 ssh_runner.go:195] Run: cat /version.json
	I1123 11:13:00.680925  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:13:00.680949  713582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:13:00.681047  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:13:00.707308  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:13:00.708264  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:13:00.901188  713582 ssh_runner.go:195] Run: systemctl --version
	I1123 11:13:00.907991  713582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:13:00.951972  713582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:13:00.956650  713582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:13:00.956777  713582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:13:00.992342  713582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 11:13:00.992367  713582 start.go:496] detecting cgroup driver to use...
	I1123 11:13:00.992399  713582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:13:00.992447  713582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:13:01.013905  713582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:13:01.027050  713582 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:13:01.027159  713582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:13:01.045359  713582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:13:01.064865  713582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:13:01.192020  713582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:13:01.322940  713582 docker.go:234] disabling docker service ...
	I1123 11:13:01.323011  713582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:13:01.344674  713582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:13:01.358880  713582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:13:01.471409  713582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:13:01.591661  713582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:13:01.606980  713582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:13:01.622615  713582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 11:13:01.622694  713582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.631869  713582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:13:01.631934  713582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.643242  713582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.652302  713582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.661340  713582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:13:01.675850  713582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.686448  713582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.700870  713582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:13:01.710724  713582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:13:01.718556  713582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:13:01.727578  713582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:13:01.860111  713582 ssh_runner.go:195] Run: sudo systemctl restart crio
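Taken together, the sed edits above should leave the CRI-O drop-in with the pause image, cgroupfs cgroup manager, pod conmon cgroup and unprivileged-port sysctl set; a sketch of how to confirm (exact surrounding lines depend on the 02-crio.conf shipped in the base image):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",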
	I1123 11:13:02.058845  713582 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:13:02.058916  713582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:13:02.062740  713582 start.go:564] Will wait 60s for crictl version
	I1123 11:13:02.062804  713582 ssh_runner.go:195] Run: which crictl
	I1123 11:13:02.066284  713582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:13:02.091652  713582 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:13:02.091755  713582 ssh_runner.go:195] Run: crio --version
	I1123 11:13:02.122779  713582 ssh_runner.go:195] Run: crio --version
	I1123 11:13:02.155676  713582 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 11:13:02.158608  713582 cli_runner.go:164] Run: docker network inspect old-k8s-version-378086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:13:02.175399  713582 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:13:02.179389  713582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
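
The two commands above make the /etc/hosts update idempotent: grep checks whether host.minikube.internal already resolves, and the rewrite appends the gateway IP only once. A small Go sketch of the same check-then-append logic follows; the ensureHostsEntry helper is made up for illustration, and minikube performs this step over SSH inside the node rather than locally.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>name" to the hosts file unless an entry for
// name is already present. Illustrative only.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), name) {
		return nil // entry already present, nothing to do
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(fmt.Sprintf("%s\t%s\n", ip, name))
	return err
}

func main() {
	// Values taken from the log above.
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
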
	I1123 11:13:02.189509  713582 kubeadm.go:884] updating cluster {Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:13:02.189621  713582 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:13:02.189671  713582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:13:02.222812  713582 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:13:02.222837  713582 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:13:02.222894  713582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:13:02.249443  713582 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:13:02.249527  713582 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:13:02.249549  713582 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1123 11:13:02.249671  713582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-378086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:13:02.249771  713582 ssh_runner.go:195] Run: crio config
	I1123 11:13:02.307413  713582 cni.go:84] Creating CNI manager for ""
	I1123 11:13:02.307485  713582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:13:02.307518  713582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:13:02.307568  713582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-378086 NodeName:old-k8s-version-378086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:13:02.307752  713582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-378086"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
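
The generated config above is what ties the earlier runtime setup together: the CRI-O socket, the cgroupfs driver, the pod subnet handed to kindnet, and the pinned Kubernetes version all flow into the kubeadm, kubelet, and kube-proxy documents. A trimmed-down Go sketch of rendering such a config with text/template follows; the opts struct and the shortened template are illustrative, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// opts carries the handful of values the log shows being substituted into the
// generated kubeadm config. Field names are illustrative.
type opts struct {
	NodeName     string
	NodeIP       string
	CRISocket    string
	PodSubnet    string
	K8sVersion   string
	CgroupDriver string
}

// A heavily trimmed version of the config rendered above; only a few fields
// are kept to keep the sketch short.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, opts{
		NodeName:     "old-k8s-version-378086",
		NodeIP:       "192.168.85.2",
		CRISocket:    "unix:///var/run/crio/crio.sock",
		PodSubnet:    "10.244.0.0/16",
		K8sVersion:   "v1.28.0",
		CgroupDriver: "cgroupfs",
	})
}
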
	I1123 11:13:02.307842  713582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 11:13:02.315638  713582 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:13:02.315774  713582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:13:02.323534  713582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 11:13:02.335711  713582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:13:02.350316  713582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1123 11:13:02.363241  713582 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:13:02.366959  713582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:13:02.376874  713582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:13:02.495445  713582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:13:02.516407  713582 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086 for IP: 192.168.85.2
	I1123 11:13:02.516476  713582 certs.go:195] generating shared ca certs ...
	I1123 11:13:02.516517  713582 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:02.516699  713582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:13:02.516779  713582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:13:02.516801  713582 certs.go:257] generating profile certs ...
	I1123 11:13:02.516870  713582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.key
	I1123 11:13:02.516914  713582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt with IP's: []
	I1123 11:13:02.627909  713582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt ...
	I1123 11:13:02.627943  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: {Name:mkafa8bd1db01ef576a29375678021a972e995f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:02.628145  713582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.key ...
	I1123 11:13:02.628160  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.key: {Name:mk4174cd011a82d6f50016be5a2bbf3bc5d0f638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:02.628259  713582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key.0966a661
	I1123 11:13:02.628276  713582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt.0966a661 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 11:13:03.035679  713582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt.0966a661 ...
	I1123 11:13:03.035718  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt.0966a661: {Name:mk413a1eeee89df0904056f54dd3dd74e11b3f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:03.035912  713582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key.0966a661 ...
	I1123 11:13:03.035929  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key.0966a661: {Name:mkc8fb0dbf0ceabcf32f8f8120291043d0ecc8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:03.036032  713582 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt.0966a661 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt
	I1123 11:13:03.036122  713582 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key.0966a661 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key
	I1123 11:13:03.036187  713582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key
	I1123 11:13:03.036209  713582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.crt with IP's: []
	I1123 11:13:03.259001  713582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.crt ...
	I1123 11:13:03.259031  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.crt: {Name:mkac5c1b87b59134614cf5b050528709f7ab8488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:03.259216  713582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key ...
	I1123 11:13:03.259229  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key: {Name:mkf961df33ffc0a75dcb45c0d419f9407a773cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
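
The certificate steps above issue three profile certs against the shared minikube CA: a client cert, an apiserver serving cert whose SANs cover the service VIP (10.96.0.1), localhost, 10.0.0.1, and the node IP, and a proxy-client (aggregator) cert. A self-contained Go sketch of issuing a serving cert with those IP SANs is shown below; it self-signs a throwaway CA for brevity, whereas the run above reuses the existing CA from the .minikube directory, and error handling is elided.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 1. CA key and self-signed CA certificate (illustrative stand-in for
	//    the persistent minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// 2. Serving certificate carrying the apiserver IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// 3. PEM-encode the result the way the .crt files on disk are stored.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
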
	I1123 11:13:03.259421  713582 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:13:03.259478  713582 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:13:03.259493  713582 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:13:03.259522  713582 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:13:03.259552  713582 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:13:03.259580  713582 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:13:03.259630  713582 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:13:03.260260  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:13:03.279258  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:13:03.297797  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:13:03.317380  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:13:03.336288  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 11:13:03.359789  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:13:03.382119  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:13:03.408958  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:13:03.433249  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:13:03.451964  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:13:03.469680  713582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:13:03.487921  713582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:13:03.500777  713582 ssh_runner.go:195] Run: openssl version
	I1123 11:13:03.507173  713582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:13:03.515509  713582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:13:03.519423  713582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:13:03.519522  713582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:13:03.560043  713582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:13:03.568197  713582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:13:03.576578  713582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:13:03.581288  713582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:13:03.581446  713582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:13:03.627466  713582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:13:03.635661  713582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:13:03.643864  713582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:13:03.647561  713582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:13:03.647627  713582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:13:03.688454  713582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
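
Each CA bundle installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above), which is how OpenSSL-based clients locate trust anchors. A Go sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does; the linkBySubjectHash helper is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the symlink step from the log:
// /etc/ssl/certs/<subject-hash>.0 -> the installed CA file.
func linkBySubjectHash(certPath, certsDir string) error {
	// openssl prints the subject hash (e.g. "b5213941") on a single line.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Remove a stale link first so os.Symlink does not fail with EEXIST.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
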
	I1123 11:13:03.696587  713582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:13:03.700100  713582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:13:03.700163  713582 kubeadm.go:401] StartCluster: {Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:13:03.700237  713582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:13:03.700298  713582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:13:03.726416  713582 cri.go:89] found id: ""
	I1123 11:13:03.726484  713582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:13:03.740669  713582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:13:03.748696  713582 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:13:03.748801  713582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:13:03.757346  713582 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:13:03.757368  713582 kubeadm.go:158] found existing configuration files:
	
	I1123 11:13:03.757449  713582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 11:13:03.765245  713582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:13:03.765337  713582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:13:03.773986  713582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 11:13:03.782246  713582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:13:03.782332  713582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:13:03.789670  713582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 11:13:03.798392  713582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:13:03.798506  713582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:13:03.807197  713582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 11:13:03.815375  713582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:13:03.815490  713582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
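
Before running kubeadm init, each existing kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when that check fails; on this first start none of the files exist, so the removals are no-ops. A Go sketch of the same check-and-remove pass follows; the cleanStaleKubeconfigs function name is made up, while the paths and endpoint are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs drops any kubeconfig that does not already point at
// the expected control-plane endpoint, matching the grep/rm sequence above.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint, keep it
		}
		// Either the file is missing or it points elsewhere: remove it so
		// kubeadm init can write a fresh copy.
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Printf("could not remove %s: %v\n", p, err)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
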
	I1123 11:13:03.822949  713582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:13:03.874148  713582 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 11:13:03.874655  713582 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:13:03.919712  713582 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:13:03.919870  713582 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:13:03.919955  713582 kubeadm.go:319] OS: Linux
	I1123 11:13:03.920038  713582 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:13:03.920128  713582 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:13:03.920205  713582 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:13:03.920285  713582 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:13:03.920395  713582 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:13:03.920483  713582 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:13:03.920559  713582 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:13:03.920641  713582 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:13:03.920723  713582 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:13:04.007224  713582 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:13:04.007394  713582 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:13:04.007527  713582 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 11:13:04.168320  713582 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:13:04.171392  713582 out.go:252]   - Generating certificates and keys ...
	I1123 11:13:04.171574  713582 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:13:04.171691  713582 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:13:04.958083  713582 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:13:05.186740  713582 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:13:05.529960  713582 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:13:06.078068  713582 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:13:06.883177  713582 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:13:06.883328  713582 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-378086] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:13:07.720089  713582 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:13:07.720574  713582 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-378086] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:13:08.101957  713582 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:13:08.597934  713582 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:13:09.032849  713582 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:13:09.033221  713582 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:13:09.437202  713582 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:13:09.992032  713582 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:13:10.363623  713582 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:13:10.973950  713582 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:13:10.974868  713582 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:13:10.984451  713582 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:13:10.988125  713582 out.go:252]   - Booting up control plane ...
	I1123 11:13:10.988238  713582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:13:10.988322  713582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:13:10.988400  713582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:13:11.006081  713582 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:13:11.007404  713582 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:13:11.007724  713582 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:13:11.150399  713582 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 11:13:18.152522  713582 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.002219 seconds
	I1123 11:13:18.152644  713582 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 11:13:18.169157  713582 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 11:13:18.697756  713582 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 11:13:18.697976  713582 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-378086 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 11:13:19.212259  713582 kubeadm.go:319] [bootstrap-token] Using token: zdwi6o.1zfp9elryrs7e9mp
	I1123 11:13:19.215222  713582 out.go:252]   - Configuring RBAC rules ...
	I1123 11:13:19.215351  713582 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 11:13:19.221836  713582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 11:13:19.233592  713582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 11:13:19.240103  713582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 11:13:19.244085  713582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 11:13:19.248474  713582 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 11:13:19.264726  713582 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 11:13:19.536133  713582 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 11:13:19.640039  713582 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 11:13:19.641312  713582 kubeadm.go:319] 
	I1123 11:13:19.641379  713582 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 11:13:19.641384  713582 kubeadm.go:319] 
	I1123 11:13:19.641470  713582 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 11:13:19.641475  713582 kubeadm.go:319] 
	I1123 11:13:19.641499  713582 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 11:13:19.641554  713582 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 11:13:19.641604  713582 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 11:13:19.641608  713582 kubeadm.go:319] 
	I1123 11:13:19.641658  713582 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 11:13:19.641662  713582 kubeadm.go:319] 
	I1123 11:13:19.641706  713582 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 11:13:19.641710  713582 kubeadm.go:319] 
	I1123 11:13:19.641759  713582 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 11:13:19.641829  713582 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 11:13:19.641893  713582 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 11:13:19.641897  713582 kubeadm.go:319] 
	I1123 11:13:19.641976  713582 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 11:13:19.642049  713582 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 11:13:19.642053  713582 kubeadm.go:319] 
	I1123 11:13:19.642137  713582 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zdwi6o.1zfp9elryrs7e9mp \
	I1123 11:13:19.642253  713582 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 11:13:19.642273  713582 kubeadm.go:319] 	--control-plane 
	I1123 11:13:19.642276  713582 kubeadm.go:319] 
	I1123 11:13:19.642355  713582 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 11:13:19.642359  713582 kubeadm.go:319] 
	I1123 11:13:19.642436  713582 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zdwi6o.1zfp9elryrs7e9mp \
	I1123 11:13:19.642532  713582 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 11:13:19.645817  713582 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 11:13:19.645934  713582 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 11:13:19.645955  713582 cni.go:84] Creating CNI manager for ""
	I1123 11:13:19.645963  713582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:13:19.652128  713582 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 11:13:19.655012  713582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 11:13:19.659664  713582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 11:13:19.659683  713582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 11:13:19.685260  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 11:13:20.719599  713582 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.03430433s)
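
The CNI step copies the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned kubectl binary against the node-local kubeconfig, which took about a second here. A minimal Go sketch of issuing that apply and timing it; applyManifest is an illustrative helper, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest runs the pinned kubectl with the node-local kubeconfig, the
// same shape of command the log shows for the CNI manifest.
func applyManifest(kubectl, kubeconfig, manifest string) (time.Duration, []byte, error) {
	start := time.Now()
	out, err := exec.Command("sudo", kubectl, "apply",
		"--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
	return time.Since(start), out, err
}

func main() {
	d, out, err := applyManifest(
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
	if err != nil {
		fmt.Printf("apply failed after %s: %v\n%s", d, err, out)
		return
	}
	fmt.Printf("applied CNI manifest in %s\n", d)
}
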
	I1123 11:13:20.719643  713582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 11:13:20.719745  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:20.719755  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-378086 minikube.k8s.io/updated_at=2025_11_23T11_13_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=old-k8s-version-378086 minikube.k8s.io/primary=true
	I1123 11:13:20.735727  713582 ops.go:34] apiserver oom_adj: -16
	I1123 11:13:20.922817  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:21.423942  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:21.923513  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:22.423660  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:22.923258  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:23.423508  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:23.922911  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:24.422858  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:24.923889  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:25.423397  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:25.923123  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:26.423180  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:26.923839  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:27.423240  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:27.923433  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:28.423779  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:28.923795  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:29.423900  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:29.923536  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:30.423456  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:30.923705  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:31.423756  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:31.923678  713582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:13:32.037559  713582 kubeadm.go:1114] duration metric: took 11.317872064s to wait for elevateKubeSystemPrivileges
	I1123 11:13:32.037588  713582 kubeadm.go:403] duration metric: took 28.337429766s to StartCluster
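
The repeated "kubectl get sa default" calls above are a roughly half-second poll waiting for the default service account to exist once the cluster-admin binding is in place; here the wait took about 11.3s. A Go sketch of such a poll-with-deadline loop follows; waitForDefaultSA is an illustrative name, and the binary and kubeconfig paths are the ones from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// timeout expires, mirroring the retry loop in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil // the service account exists, kube-system is usable
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	start := time.Now()
	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("default service account appeared after %s\n", time.Since(start))
}
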
	I1123 11:13:32.037605  713582 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:32.037670  713582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:13:32.038608  713582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:13:32.038823  713582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:13:32.038992  713582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:13:32.039240  713582 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:13:32.039275  713582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:13:32.039330  713582 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-378086"
	I1123 11:13:32.039344  713582 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-378086"
	I1123 11:13:32.039365  713582 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:13:32.039914  713582 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-378086"
	I1123 11:13:32.039937  713582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-378086"
	I1123 11:13:32.040219  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:13:32.040753  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:13:32.043254  713582 out.go:179] * Verifying Kubernetes components...
	I1123 11:13:32.047740  713582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:13:32.082056  713582 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-378086"
	I1123 11:13:32.082094  713582 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:13:32.082519  713582 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:13:32.091942  713582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:13:32.094056  713582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:13:32.094078  713582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:13:32.094144  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:13:32.123642  713582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:13:32.123671  713582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:13:32.123745  713582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:13:32.129631  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:13:32.151111  713582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:13:32.402522  713582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:13:32.409248  713582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:13:32.444985  713582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:13:32.456597  713582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:13:33.062191  713582 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 11:13:33.296139  713582 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-378086" to be "Ready" ...
	I1123 11:13:33.310841  713582 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 11:13:33.313913  713582 addons.go:530] duration metric: took 1.274622961s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 11:13:33.567271  713582 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-378086" context rescaled to 1 replicas
	W1123 11:13:35.300092  713582 node_ready.go:57] node "old-k8s-version-378086" has "Ready":"False" status (will retry)
	W1123 11:13:37.300253  713582 node_ready.go:57] node "old-k8s-version-378086" has "Ready":"False" status (will retry)
	W1123 11:13:39.799763  713582 node_ready.go:57] node "old-k8s-version-378086" has "Ready":"False" status (will retry)
	W1123 11:13:41.800646  713582 node_ready.go:57] node "old-k8s-version-378086" has "Ready":"False" status (will retry)
	W1123 11:13:44.299479  713582 node_ready.go:57] node "old-k8s-version-378086" has "Ready":"False" status (will retry)
	W1123 11:13:46.799467  713582 node_ready.go:57] node "old-k8s-version-378086" has "Ready":"False" status (will retry)
	I1123 11:13:47.301053  713582 node_ready.go:49] node "old-k8s-version-378086" is "Ready"
	I1123 11:13:47.301077  713582 node_ready.go:38] duration metric: took 14.004906789s for node "old-k8s-version-378086" to be "Ready" ...
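
The node_ready wait above polls the node's Ready condition and logs a warning on each miss until kindnet has the pod network up, about 14s here. A Go sketch of checking that condition with a kubectl jsonpath query follows; it assumes kubectl is on PATH with a current kubeconfig and is purely illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady checks the Ready condition with a jsonpath query, the same signal
// the retry loop above is waiting on.
func nodeReady(node string) bool {
	out, err := exec.Command("kubectl", "get", "node", node, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	node := "old-k8s-version-378086"
	deadline := time.Now().Add(6 * time.Minute)
	for !nodeReady(node) {
		if time.Now().After(deadline) {
			fmt.Println("node never became Ready")
			return
		}
		time.Sleep(2 * time.Second) // roughly the cadence of the retries above
	}
	fmt.Println("node is Ready")
}
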
	I1123 11:13:47.301092  713582 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:13:47.301152  713582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:13:47.334317  713582 api_server.go:72] duration metric: took 15.295466254s to wait for apiserver process to appear ...
	I1123 11:13:47.334343  713582 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:13:47.334363  713582 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 11:13:47.343733  713582 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 11:13:47.345150  713582 api_server.go:141] control plane version: v1.28.0
	I1123 11:13:47.345176  713582 api_server.go:131] duration metric: took 10.82523ms to wait for apiserver health ...
	I1123 11:13:47.345187  713582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:13:47.348876  713582 system_pods.go:59] 8 kube-system pods found
	I1123 11:13:47.348917  713582 system_pods.go:61] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:13:47.348924  713582 system_pods.go:61] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running
	I1123 11:13:47.348930  713582 system_pods.go:61] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:13:47.348938  713582 system_pods.go:61] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running
	I1123 11:13:47.348943  713582 system_pods.go:61] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running
	I1123 11:13:47.348947  713582 system_pods.go:61] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:13:47.348950  713582 system_pods.go:61] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running
	I1123 11:13:47.348956  713582 system_pods.go:61] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:13:47.348962  713582 system_pods.go:74] duration metric: took 3.769202ms to wait for pod list to return data ...
	I1123 11:13:47.348978  713582 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:13:47.351311  713582 default_sa.go:45] found service account: "default"
	I1123 11:13:47.351338  713582 default_sa.go:55] duration metric: took 2.35399ms for default service account to be created ...
	I1123 11:13:47.351348  713582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:13:47.355404  713582 system_pods.go:86] 8 kube-system pods found
	I1123 11:13:47.355438  713582 system_pods.go:89] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:13:47.355445  713582 system_pods.go:89] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running
	I1123 11:13:47.355452  713582 system_pods.go:89] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:13:47.355457  713582 system_pods.go:89] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running
	I1123 11:13:47.355463  713582 system_pods.go:89] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running
	I1123 11:13:47.355467  713582 system_pods.go:89] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:13:47.355472  713582 system_pods.go:89] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running
	I1123 11:13:47.355482  713582 system_pods.go:89] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:13:47.355517  713582 retry.go:31] will retry after 260.498034ms: missing components: kube-dns
	I1123 11:13:47.620573  713582 system_pods.go:86] 8 kube-system pods found
	I1123 11:13:47.620610  713582 system_pods.go:89] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:13:47.620617  713582 system_pods.go:89] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running
	I1123 11:13:47.620651  713582 system_pods.go:89] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:13:47.620661  713582 system_pods.go:89] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running
	I1123 11:13:47.620666  713582 system_pods.go:89] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running
	I1123 11:13:47.620675  713582 system_pods.go:89] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:13:47.620679  713582 system_pods.go:89] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running
	I1123 11:13:47.620687  713582 system_pods.go:89] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:13:47.620715  713582 retry.go:31] will retry after 256.934836ms: missing components: kube-dns
	I1123 11:13:47.893135  713582 system_pods.go:86] 8 kube-system pods found
	I1123 11:13:47.893162  713582 system_pods.go:89] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Running
	I1123 11:13:47.893172  713582 system_pods.go:89] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running
	I1123 11:13:47.893176  713582 system_pods.go:89] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:13:47.893182  713582 system_pods.go:89] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running
	I1123 11:13:47.893187  713582 system_pods.go:89] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running
	I1123 11:13:47.893191  713582 system_pods.go:89] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:13:47.893194  713582 system_pods.go:89] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running
	I1123 11:13:47.893198  713582 system_pods.go:89] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Running
	I1123 11:13:47.893206  713582 system_pods.go:126] duration metric: took 541.852897ms to wait for k8s-apps to be running ...
	I1123 11:13:47.893213  713582 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:13:47.893273  713582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:13:47.911028  713582 system_svc.go:56] duration metric: took 17.80453ms WaitForService to wait for kubelet
	I1123 11:13:47.911057  713582 kubeadm.go:587] duration metric: took 15.872210607s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:13:47.911075  713582 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:13:47.915569  713582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:13:47.915655  713582 node_conditions.go:123] node cpu capacity is 2
	I1123 11:13:47.915684  713582 node_conditions.go:105] duration metric: took 4.602402ms to run NodePressure ...
	I1123 11:13:47.915734  713582 start.go:242] waiting for startup goroutines ...
	I1123 11:13:47.915758  713582 start.go:247] waiting for cluster config update ...
	I1123 11:13:47.915783  713582 start.go:256] writing updated cluster config ...
	I1123 11:13:47.916128  713582 ssh_runner.go:195] Run: rm -f paused
	I1123 11:13:47.920802  713582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:13:47.925305  713582 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lr4ln" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:47.930532  713582 pod_ready.go:94] pod "coredns-5dd5756b68-lr4ln" is "Ready"
	I1123 11:13:47.930564  713582 pod_ready.go:86] duration metric: took 5.235173ms for pod "coredns-5dd5756b68-lr4ln" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:47.934285  713582 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:47.939311  713582 pod_ready.go:94] pod "etcd-old-k8s-version-378086" is "Ready"
	I1123 11:13:47.939338  713582 pod_ready.go:86] duration metric: took 5.021049ms for pod "etcd-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:47.942708  713582 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:47.947958  713582 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-378086" is "Ready"
	I1123 11:13:47.947984  713582 pod_ready.go:86] duration metric: took 5.24945ms for pod "kube-apiserver-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:47.951467  713582 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:48.324764  713582 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-378086" is "Ready"
	I1123 11:13:48.324794  713582 pod_ready.go:86] duration metric: took 373.302971ms for pod "kube-controller-manager-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:48.525704  713582 pod_ready.go:83] waiting for pod "kube-proxy-p546f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:48.925176  713582 pod_ready.go:94] pod "kube-proxy-p546f" is "Ready"
	I1123 11:13:48.925207  713582 pod_ready.go:86] duration metric: took 399.477357ms for pod "kube-proxy-p546f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:49.125975  713582 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:49.525772  713582 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-378086" is "Ready"
	I1123 11:13:49.525848  713582 pod_ready.go:86] duration metric: took 399.845535ms for pod "kube-scheduler-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:13:49.525873  713582 pod_ready.go:40] duration metric: took 1.605029501s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:13:49.589645  713582 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 11:13:49.592540  713582 out.go:203] 
	W1123 11:13:49.595506  713582 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 11:13:49.598446  713582 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 11:13:49.602302  713582 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-378086" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:13:47 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:47.458585948Z" level=info msg="Started container" PID=1932 containerID=fd2367eb7b54329d03e79ca9c219495a6a16f9c4a1433fd6cdaa48071e9f236b description=kube-system/coredns-5dd5756b68-lr4ln/coredns id=dbdaa76b-8f29-43dd-b41d-48b51b5ee7bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=da7749a192c1f9da679472d095ec6995cd51fc9e0e3bb80fe980148a0e0a7ca5
	Nov 23 11:13:47 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:47.458627491Z" level=info msg="Starting container: 3fd7926bdf09caba5cc7fbd701db44681b685840795bd8c6c2adfa2c9562355d" id=64c5f017-cd89-4614-bbe8-4a60fcf41e91 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:13:47 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:47.46269666Z" level=info msg="Started container" PID=1928 containerID=3fd7926bdf09caba5cc7fbd701db44681b685840795bd8c6c2adfa2c9562355d description=kube-system/storage-provisioner/storage-provisioner id=64c5f017-cd89-4614-bbe8-4a60fcf41e91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da5b889096cd386cf38468385e66190ae96d65d29638699693f68047db3d8e38
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.164623368Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3c60738c-0531-4a6a-8a7e-97bfa77a2320 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.164701687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.175326553Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718 UID:879f29eb-c272-4f6c-b331-1495c2897434 NetNS:/var/run/netns/2d4f6398-ba06-45b7-8c55-f2cb08afe623 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000cd5188}] Aliases:map[]}"
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.175380453Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.185320356Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718 UID:879f29eb-c272-4f6c-b331-1495c2897434 NetNS:/var/run/netns/2d4f6398-ba06-45b7-8c55-f2cb08afe623 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000cd5188}] Aliases:map[]}"
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.18555523Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.190881834Z" level=info msg="Ran pod sandbox ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718 with infra container: default/busybox/POD" id=3c60738c-0531-4a6a-8a7e-97bfa77a2320 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.192218638Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d27c7c18-d67f-4dac-835c-c7dd9bfb7847 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.192409311Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d27c7c18-d67f-4dac-835c-c7dd9bfb7847 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.192493554Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d27c7c18-d67f-4dac-835c-c7dd9bfb7847 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.193708173Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5e68abc-9405-48b0-9ef8-7e806f7126ff name=/runtime.v1.ImageService/PullImage
	Nov 23 11:13:50 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:50.196987409Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.348986433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b5e68abc-9405-48b0-9ef8-7e806f7126ff name=/runtime.v1.ImageService/PullImage
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.350011882Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c565c211-e709-40bd-907b-b31f7b6edcf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.351428268Z" level=info msg="Creating container: default/busybox/busybox" id=1b73ce27-af19-4828-a06c-15f1b694e594 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.351539802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.356498081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.357094789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.373959706Z" level=info msg="Created container 8a38bccb47e8b8b8bfc2e518d0d9a8a95995c8fac23151b461ebf8ba3afe7963: default/busybox/busybox" id=1b73ce27-af19-4828-a06c-15f1b694e594 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.374821362Z" level=info msg="Starting container: 8a38bccb47e8b8b8bfc2e518d0d9a8a95995c8fac23151b461ebf8ba3afe7963" id=ed91a905-e480-40c1-888c-061c68d2f6cf name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:13:52 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:52.376426879Z" level=info msg="Started container" PID=1995 containerID=8a38bccb47e8b8b8bfc2e518d0d9a8a95995c8fac23151b461ebf8ba3afe7963 description=default/busybox/busybox id=ed91a905-e480-40c1-888c-061c68d2f6cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718
	Nov 23 11:13:59 old-k8s-version-378086 crio[843]: time="2025-11-23T11:13:59.051438708Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8a38bccb47e8b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   ec676d7d9c977       busybox                                          default
	fd2367eb7b543       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   da7749a192c1f       coredns-5dd5756b68-lr4ln                         kube-system
	3fd7926bdf09c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   da5b889096cd3       storage-provisioner                              kube-system
	677cc0791b555       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   702a1090bb540       kindnet-99vxv                                    kube-system
	703cde83fac85       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   fd88c4c93cdd9       kube-proxy-p546f                                 kube-system
	140a2f95e5fd3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   8d979ba485562       kube-controller-manager-old-k8s-version-378086   kube-system
	ad9552ac2076a       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   3e766c4ee1e83       kube-scheduler-old-k8s-version-378086            kube-system
	cef0cbda03bc8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   0ae58c5bffed9       etcd-old-k8s-version-378086                      kube-system
	5ce4dc085204e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   0b56c7395996e       kube-apiserver-old-k8s-version-378086            kube-system
	
	
	==> coredns [fd2367eb7b54329d03e79ca9c219495a6a16f9c4a1433fd6cdaa48071e9f236b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38130 - 136 "HINFO IN 4662451142315851165.2560491584910871568. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034786257s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-378086
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-378086
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-378086
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_13_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-378086
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:13:50 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:13:50 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:13:50 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:13:50 +0000   Sun, 23 Nov 2025 11:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-378086
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                4336eb7a-3e7c-4f09-a2a9-ee819430f43e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-lr4ln                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-378086                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-99vxv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-378086             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-378086    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-p546f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-378086             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 49s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 49s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 49s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-378086 event: Registered Node old-k8s-version-378086 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-378086 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:49] overlayfs: idmapped layers are currently not supported
	[Nov23 10:53] overlayfs: idmapped layers are currently not supported
	[Nov23 10:54] overlayfs: idmapped layers are currently not supported
	[Nov23 10:55] overlayfs: idmapped layers are currently not supported
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cef0cbda03bc8ea7c39b253e7cf0752e54e549c5c18e18a48d20c7f584aa6b7b] <==
	{"level":"info","ts":"2025-11-23T11:13:12.573735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T11:13:12.574365Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T11:13:12.588986Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T11:13:12.589295Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T11:13:12.589362Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T11:13:12.589451Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T11:13:12.589488Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T11:13:12.937863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T11:13:12.93798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T11:13:12.938019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-23T11:13:12.938072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T11:13:12.938105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T11:13:12.938154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-23T11:13:12.938186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T11:13:12.940699Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:13:12.945639Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-378086 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T11:13:12.945806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T11:13:12.946991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T11:13:12.947435Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:13:12.947562Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:13:12.947634Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:13:12.947669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T11:13:12.954194Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T11:13:12.955445Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T11:13:12.956247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:14:01 up  3:56,  0 user,  load average: 3.00, 3.53, 2.78
	Linux old-k8s-version-378086 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [677cc0791b555f48224bc4b9a3e8d2179a918220a72116616341bbb184054b09] <==
	I1123 11:13:36.659849       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:13:36.660077       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:13:36.660203       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:13:36.660222       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:13:36.660236       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:13:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:13:36.862365       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:13:36.862397       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:13:36.862406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:13:36.862694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 11:13:37.162910       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:13:37.162936       1 metrics.go:72] Registering metrics
	I1123 11:13:37.162991       1 controller.go:711] "Syncing nftables rules"
	I1123 11:13:46.863635       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:13:46.863729       1 main.go:301] handling current node
	I1123 11:13:56.861931       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:13:56.861971       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5ce4dc085204e84e4a3d34019673340d35d4f51d22284bf90b400b73708060cb] <==
	I1123 11:13:16.493151       1 aggregator.go:166] initial CRD sync complete...
	I1123 11:13:16.493170       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 11:13:16.493175       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:13:16.493182       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:13:16.513096       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 11:13:16.517217       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:13:16.517372       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 11:13:16.517699       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 11:13:16.517771       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 11:13:16.577615       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:13:17.219843       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:13:17.225487       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:13:17.225513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:13:17.847410       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:13:17.902382       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:13:17.971999       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:13:17.983711       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 11:13:17.985112       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 11:13:17.994736       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:13:18.422367       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 11:13:19.520821       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 11:13:19.534586       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:13:19.548948       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 11:13:32.178692       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 11:13:32.225254       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [140a2f95e5fd3ffc7e3baf94ca7ac18a94475105803314dba83c6ccac25d8ff7] <==
	I1123 11:13:31.368264       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 11:13:31.491553       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 11:13:31.862529       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 11:13:31.886179       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 11:13:31.886275       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 11:13:32.232529       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-p546f"
	I1123 11:13:32.235012       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-99vxv"
	I1123 11:13:32.260511       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 11:13:32.392808       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bpxqr"
	I1123 11:13:32.470608       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lr4ln"
	I1123 11:13:32.521629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="266.264142ms"
	I1123 11:13:32.587464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.784923ms"
	I1123 11:13:32.587539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.069µs"
	I1123 11:13:32.623593       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.495µs"
	I1123 11:13:33.108083       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 11:13:33.129580       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bpxqr"
	I1123 11:13:33.147664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.180517ms"
	I1123 11:13:33.174362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.650077ms"
	I1123 11:13:33.174437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.714µs"
	I1123 11:13:47.086688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.758µs"
	I1123 11:13:47.107862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.587µs"
	I1123 11:13:47.844134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.92µs"
	I1123 11:13:47.876176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.48189ms"
	I1123 11:13:47.876483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.378µs"
	I1123 11:13:51.242302       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [703cde83fac8579f11f4aa24e0b671cdae3f8d08c14c43f8d4d16cf6ec7fc390] <==
	I1123 11:13:34.549884       1 server_others.go:69] "Using iptables proxy"
	I1123 11:13:34.575098       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 11:13:34.610252       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:13:34.612029       1 server_others.go:152] "Using iptables Proxier"
	I1123 11:13:34.612064       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 11:13:34.612071       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 11:13:34.612104       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 11:13:34.612300       1 server.go:846] "Version info" version="v1.28.0"
	I1123 11:13:34.612324       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:13:34.613600       1 config.go:188] "Starting service config controller"
	I1123 11:13:34.613611       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 11:13:34.613628       1 config.go:97] "Starting endpoint slice config controller"
	I1123 11:13:34.613633       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 11:13:34.613985       1 config.go:315] "Starting node config controller"
	I1123 11:13:34.613992       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 11:13:34.715450       1 shared_informer.go:318] Caches are synced for node config
	I1123 11:13:34.715480       1 shared_informer.go:318] Caches are synced for service config
	I1123 11:13:34.715506       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ad9552ac2076a36cf2f76704a0ecfbbe352dea84775366ef66c7be763df37044] <==
	W1123 11:13:16.519557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 11:13:16.519588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 11:13:16.519723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 11:13:16.519749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 11:13:16.519835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 11:13:16.519848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 11:13:16.520024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 11:13:16.520093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 11:13:16.520113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 11:13:16.520164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 11:13:16.520953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 11:13:16.520991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 11:13:16.521153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 11:13:16.521188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 11:13:16.521293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 11:13:16.522403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 11:13:16.521323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 11:13:16.522512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 11:13:17.470392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 11:13:17.470439       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 11:13:17.521560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 11:13:17.521674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 11:13:17.522479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 11:13:17.522551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1123 11:13:18.074192       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 11:13:32 old-k8s-version-378086 kubelet[1378]: E1123 11:13:32.295438    1378 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-378086" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-378086' and this object
	Nov 23 11:13:32 old-k8s-version-378086 kubelet[1378]: I1123 11:13:32.386435    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7ac305c-9238-47e4-9fe9-101bcf9865f7-cni-cfg\") pod \"kindnet-99vxv\" (UID: \"f7ac305c-9238-47e4-9fe9-101bcf9865f7\") " pod="kube-system/kindnet-99vxv"
	Nov 23 11:13:32 old-k8s-version-378086 kubelet[1378]: I1123 11:13:32.386510    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7ac305c-9238-47e4-9fe9-101bcf9865f7-xtables-lock\") pod \"kindnet-99vxv\" (UID: \"f7ac305c-9238-47e4-9fe9-101bcf9865f7\") " pod="kube-system/kindnet-99vxv"
	Nov 23 11:13:32 old-k8s-version-378086 kubelet[1378]: I1123 11:13:32.386535    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7ac305c-9238-47e4-9fe9-101bcf9865f7-lib-modules\") pod \"kindnet-99vxv\" (UID: \"f7ac305c-9238-47e4-9fe9-101bcf9865f7\") " pod="kube-system/kindnet-99vxv"
	Nov 23 11:13:32 old-k8s-version-378086 kubelet[1378]: I1123 11:13:32.386574    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgt8\" (UniqueName: \"kubernetes.io/projected/f7ac305c-9238-47e4-9fe9-101bcf9865f7-kube-api-access-chgt8\") pod \"kindnet-99vxv\" (UID: \"f7ac305c-9238-47e4-9fe9-101bcf9865f7\") " pod="kube-system/kindnet-99vxv"
	Nov 23 11:13:33 old-k8s-version-378086 kubelet[1378]: E1123 11:13:33.501875    1378 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:13:33 old-k8s-version-378086 kubelet[1378]: E1123 11:13:33.502360    1378 projected.go:198] Error preparing data for projected volume kube-api-access-pltcb for pod kube-system/kube-proxy-p546f: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:13:33 old-k8s-version-378086 kubelet[1378]: E1123 11:13:33.502519    1378 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0ebea1b-f874-4486-a261-3541f3db2d42-kube-api-access-pltcb podName:c0ebea1b-f874-4486-a261-3541f3db2d42 nodeName:}" failed. No retries permitted until 2025-11-23 11:13:34.002491659 +0000 UTC m=+14.525016879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pltcb" (UniqueName: "kubernetes.io/projected/c0ebea1b-f874-4486-a261-3541f3db2d42-kube-api-access-pltcb") pod "kube-proxy-p546f" (UID: "c0ebea1b-f874-4486-a261-3541f3db2d42") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:13:33 old-k8s-version-378086 kubelet[1378]: W1123 11:13:33.813001    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-702a1090bb540e7037a8b7b04e54e52ce11c21b22ae55085b2406f3cbf7695e3 WatchSource:0}: Error finding container 702a1090bb540e7037a8b7b04e54e52ce11c21b22ae55085b2406f3cbf7695e3: Status 404 returned error can't find the container with id 702a1090bb540e7037a8b7b04e54e52ce11c21b22ae55085b2406f3cbf7695e3
	Nov 23 11:13:34 old-k8s-version-378086 kubelet[1378]: I1123 11:13:34.810660    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p546f" podStartSLOduration=2.81061936 podCreationTimestamp="2025-11-23 11:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:13:34.809581801 +0000 UTC m=+15.332107021" watchObservedRunningTime="2025-11-23 11:13:34.81061936 +0000 UTC m=+15.333144580"
	Nov 23 11:13:39 old-k8s-version-378086 kubelet[1378]: I1123 11:13:39.706981    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-99vxv" podStartSLOduration=5.016023281 podCreationTimestamp="2025-11-23 11:13:32 +0000 UTC" firstStartedPulling="2025-11-23 11:13:33.815883621 +0000 UTC m=+14.338408841" lastFinishedPulling="2025-11-23 11:13:36.506786692 +0000 UTC m=+17.029311904" observedRunningTime="2025-11-23 11:13:36.817688369 +0000 UTC m=+17.340213581" watchObservedRunningTime="2025-11-23 11:13:39.706926344 +0000 UTC m=+20.229451556"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.053150    1378 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.085632    1378 topology_manager.go:215] "Topology Admit Handler" podUID="bb9ae516-3281-45af-9186-d257de3155f0" podNamespace="kube-system" podName="coredns-5dd5756b68-lr4ln"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.093249    1378 topology_manager.go:215] "Topology Admit Handler" podUID="6c2b2474-9610-4bd7-9676-545cf9ec1767" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.194713    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb9ae516-3281-45af-9186-d257de3155f0-config-volume\") pod \"coredns-5dd5756b68-lr4ln\" (UID: \"bb9ae516-3281-45af-9186-d257de3155f0\") " pod="kube-system/coredns-5dd5756b68-lr4ln"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.194774    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4gw\" (UniqueName: \"kubernetes.io/projected/bb9ae516-3281-45af-9186-d257de3155f0-kube-api-access-vn4gw\") pod \"coredns-5dd5756b68-lr4ln\" (UID: \"bb9ae516-3281-45af-9186-d257de3155f0\") " pod="kube-system/coredns-5dd5756b68-lr4ln"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.194801    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c2b2474-9610-4bd7-9676-545cf9ec1767-tmp\") pod \"storage-provisioner\" (UID: \"6c2b2474-9610-4bd7-9676-545cf9ec1767\") " pod="kube-system/storage-provisioner"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.194829    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngsd\" (UniqueName: \"kubernetes.io/projected/6c2b2474-9610-4bd7-9676-545cf9ec1767-kube-api-access-lngsd\") pod \"storage-provisioner\" (UID: \"6c2b2474-9610-4bd7-9676-545cf9ec1767\") " pod="kube-system/storage-provisioner"
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: W1123 11:13:47.400816    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-da5b889096cd386cf38468385e66190ae96d65d29638699693f68047db3d8e38 WatchSource:0}: Error finding container da5b889096cd386cf38468385e66190ae96d65d29638699693f68047db3d8e38: Status 404 returned error can't find the container with id da5b889096cd386cf38468385e66190ae96d65d29638699693f68047db3d8e38
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: W1123 11:13:47.423262    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-da7749a192c1f9da679472d095ec6995cd51fc9e0e3bb80fe980148a0e0a7ca5 WatchSource:0}: Error finding container da7749a192c1f9da679472d095ec6995cd51fc9e0e3bb80fe980148a0e0a7ca5: Status 404 returned error can't find the container with id da7749a192c1f9da679472d095ec6995cd51fc9e0e3bb80fe980148a0e0a7ca5
	Nov 23 11:13:47 old-k8s-version-378086 kubelet[1378]: I1123 11:13:47.841099    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lr4ln" podStartSLOduration=15.841055569 podCreationTimestamp="2025-11-23 11:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:13:47.840254739 +0000 UTC m=+28.362779951" watchObservedRunningTime="2025-11-23 11:13:47.841055569 +0000 UTC m=+28.363580830"
	Nov 23 11:13:49 old-k8s-version-378086 kubelet[1378]: I1123 11:13:49.862066    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.862021867 podCreationTimestamp="2025-11-23 11:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:13:47.895945707 +0000 UTC m=+28.418470918" watchObservedRunningTime="2025-11-23 11:13:49.862021867 +0000 UTC m=+30.384547087"
	Nov 23 11:13:49 old-k8s-version-378086 kubelet[1378]: I1123 11:13:49.862479    1378 topology_manager.go:215] "Topology Admit Handler" podUID="879f29eb-c272-4f6c-b331-1495c2897434" podNamespace="default" podName="busybox"
	Nov 23 11:13:49 old-k8s-version-378086 kubelet[1378]: I1123 11:13:49.913772    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8w7\" (UniqueName: \"kubernetes.io/projected/879f29eb-c272-4f6c-b331-1495c2897434-kube-api-access-qf8w7\") pod \"busybox\" (UID: \"879f29eb-c272-4f6c-b331-1495c2897434\") " pod="default/busybox"
	Nov 23 11:13:50 old-k8s-version-378086 kubelet[1378]: W1123 11:13:50.187464    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718 WatchSource:0}: Error finding container ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718: Status 404 returned error can't find the container with id ec676d7d9c977d8caf38ca3896caab53f08f68a09dd28fc32d8abd5738767718
	
	
	==> storage-provisioner [3fd7926bdf09caba5cc7fbd701db44681b685840795bd8c6c2adfa2c9562355d] <==
	I1123 11:13:47.486266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:13:47.501277       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:13:47.501446       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 11:13:47.509493       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:13:47.509888       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-378086_c68b4807-0a8a-43a9-bfca-1ba4e3bd9bee!
	I1123 11:13:47.510570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96c35a90-0779-45d9-8ae6-4ff1ea7116b2", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-378086_c68b4807-0a8a-43a9-bfca-1ba4e3bd9bee became leader
	I1123 11:13:47.611383       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-378086_c68b4807-0a8a-43a9-bfca-1ba4e3bd9bee!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-378086 -n old-k8s-version-378086
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-378086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.80s)
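
The storage-provisioner log above (leaderelection.go:243/253) shows the provisioner acquiring a leader lock on the kube-system/k8s.io-minikube-hostpath object before starting its controller; the event it emits is against an Endpoints resource. Below is a minimal, illustrative client-go sketch of that handshake, not the provisioner's actual code: it uses the modern Lease lock instead of the legacy Endpoints lock, and the identity string and timings are placeholders.

	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lock name/namespace match the log above; the identity is a placeholder.
		lock, err := resourcelock.New(
			resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-provisioner-id"},
		)
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Equivalent to "Starting provisioner controller ..." in the log.
					log.Println("acquired lease, starting controller")
					<-ctx.Done()
				},
				OnStoppedLeading: func() { log.Println("lost lease, stopping") },
			},
		})
	}
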

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-378086 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-378086 --alsologtostderr -v=1: exit status 80 (2.295999485s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-378086 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:15:21.148020  719494 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:15:21.148239  719494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:15:21.148276  719494 out.go:374] Setting ErrFile to fd 2...
	I1123 11:15:21.148297  719494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:15:21.148575  719494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:15:21.148901  719494 out.go:368] Setting JSON to false
	I1123 11:15:21.148960  719494 mustload.go:66] Loading cluster: old-k8s-version-378086
	I1123 11:15:21.149389  719494 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:15:21.149968  719494 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:15:21.168010  719494 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:15:21.168415  719494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:15:21.226148  719494 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 11:15:21.215329992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:15:21.226841  719494 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-378086 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 11:15:21.230430  719494 out.go:179] * Pausing node old-k8s-version-378086 ... 
	I1123 11:15:21.233365  719494 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:15:21.233816  719494 ssh_runner.go:195] Run: systemctl --version
	I1123 11:15:21.233869  719494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:15:21.252947  719494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:15:21.360128  719494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:15:21.378031  719494 pause.go:52] kubelet running: true
	I1123 11:15:21.378109  719494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:15:21.624608  719494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:15:21.624706  719494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:15:21.704234  719494 cri.go:89] found id: "72eefe4998ad926e91ba0b4aeaa70f2824e1d1d4509369827c4a7c5dda6c05e4"
	I1123 11:15:21.704276  719494 cri.go:89] found id: "3c42b937421338466200e60e96d69686288069898351e5d8bd5f9d3a6dcfe764"
	I1123 11:15:21.704281  719494 cri.go:89] found id: "df6da468794be21cefbc6cb802bef7733829bfed7b575a64f34d2e62f4b2d0db"
	I1123 11:15:21.704285  719494 cri.go:89] found id: "6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada"
	I1123 11:15:21.704288  719494 cri.go:89] found id: "41652b70682024357c15f7e082dfdfdb23f995e78049b69dcbd577a6cfe04c4a"
	I1123 11:15:21.704291  719494 cri.go:89] found id: "e72df448160ac085b2167283e8c8a22496db5a4654f14b4aee7f1b6b959124f9"
	I1123 11:15:21.704294  719494 cri.go:89] found id: "8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679"
	I1123 11:15:21.704297  719494 cri.go:89] found id: "6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf"
	I1123 11:15:21.704301  719494 cri.go:89] found id: "0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae"
	I1123 11:15:21.704308  719494 cri.go:89] found id: "070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	I1123 11:15:21.704311  719494 cri.go:89] found id: "4c5050d05088c8d4aa155ed1ef8c68b82a7e47e3df5aea08651a337b5ecd164f"
	I1123 11:15:21.704314  719494 cri.go:89] found id: ""
	I1123 11:15:21.704371  719494 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:15:21.726137  719494 retry.go:31] will retry after 152.065218ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:15:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:15:21.878529  719494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:15:21.895930  719494 pause.go:52] kubelet running: false
	I1123 11:15:21.896097  719494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:15:22.133990  719494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:15:22.134136  719494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:15:22.222095  719494 cri.go:89] found id: "72eefe4998ad926e91ba0b4aeaa70f2824e1d1d4509369827c4a7c5dda6c05e4"
	I1123 11:15:22.222119  719494 cri.go:89] found id: "3c42b937421338466200e60e96d69686288069898351e5d8bd5f9d3a6dcfe764"
	I1123 11:15:22.222124  719494 cri.go:89] found id: "df6da468794be21cefbc6cb802bef7733829bfed7b575a64f34d2e62f4b2d0db"
	I1123 11:15:22.222128  719494 cri.go:89] found id: "6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada"
	I1123 11:15:22.222131  719494 cri.go:89] found id: "41652b70682024357c15f7e082dfdfdb23f995e78049b69dcbd577a6cfe04c4a"
	I1123 11:15:22.222134  719494 cri.go:89] found id: "e72df448160ac085b2167283e8c8a22496db5a4654f14b4aee7f1b6b959124f9"
	I1123 11:15:22.222137  719494 cri.go:89] found id: "8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679"
	I1123 11:15:22.222165  719494 cri.go:89] found id: "6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf"
	I1123 11:15:22.222170  719494 cri.go:89] found id: "0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae"
	I1123 11:15:22.222176  719494 cri.go:89] found id: "070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	I1123 11:15:22.222187  719494 cri.go:89] found id: "4c5050d05088c8d4aa155ed1ef8c68b82a7e47e3df5aea08651a337b5ecd164f"
	I1123 11:15:22.222190  719494 cri.go:89] found id: ""
	I1123 11:15:22.222252  719494 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:15:22.234656  719494 retry.go:31] will retry after 215.498775ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:15:22Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:15:22.451174  719494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:15:22.466296  719494 pause.go:52] kubelet running: false
	I1123 11:15:22.466360  719494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:15:22.646289  719494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:15:22.646387  719494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:15:22.719198  719494 cri.go:89] found id: "72eefe4998ad926e91ba0b4aeaa70f2824e1d1d4509369827c4a7c5dda6c05e4"
	I1123 11:15:22.719229  719494 cri.go:89] found id: "3c42b937421338466200e60e96d69686288069898351e5d8bd5f9d3a6dcfe764"
	I1123 11:15:22.719235  719494 cri.go:89] found id: "df6da468794be21cefbc6cb802bef7733829bfed7b575a64f34d2e62f4b2d0db"
	I1123 11:15:22.719245  719494 cri.go:89] found id: "6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada"
	I1123 11:15:22.719249  719494 cri.go:89] found id: "41652b70682024357c15f7e082dfdfdb23f995e78049b69dcbd577a6cfe04c4a"
	I1123 11:15:22.719253  719494 cri.go:89] found id: "e72df448160ac085b2167283e8c8a22496db5a4654f14b4aee7f1b6b959124f9"
	I1123 11:15:22.719256  719494 cri.go:89] found id: "8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679"
	I1123 11:15:22.719259  719494 cri.go:89] found id: "6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf"
	I1123 11:15:22.719262  719494 cri.go:89] found id: "0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae"
	I1123 11:15:22.719268  719494 cri.go:89] found id: "070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	I1123 11:15:22.719274  719494 cri.go:89] found id: "4c5050d05088c8d4aa155ed1ef8c68b82a7e47e3df5aea08651a337b5ecd164f"
	I1123 11:15:22.719277  719494 cri.go:89] found id: ""
	I1123 11:15:22.719333  719494 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:15:22.731030  719494 retry.go:31] will retry after 346.049908ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:15:22Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:15:23.077635  719494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:15:23.093327  719494 pause.go:52] kubelet running: false
	I1123 11:15:23.093500  719494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:15:23.275105  719494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:15:23.275218  719494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:15:23.345338  719494 cri.go:89] found id: "72eefe4998ad926e91ba0b4aeaa70f2824e1d1d4509369827c4a7c5dda6c05e4"
	I1123 11:15:23.345365  719494 cri.go:89] found id: "3c42b937421338466200e60e96d69686288069898351e5d8bd5f9d3a6dcfe764"
	I1123 11:15:23.345370  719494 cri.go:89] found id: "df6da468794be21cefbc6cb802bef7733829bfed7b575a64f34d2e62f4b2d0db"
	I1123 11:15:23.345374  719494 cri.go:89] found id: "6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada"
	I1123 11:15:23.345378  719494 cri.go:89] found id: "41652b70682024357c15f7e082dfdfdb23f995e78049b69dcbd577a6cfe04c4a"
	I1123 11:15:23.345382  719494 cri.go:89] found id: "e72df448160ac085b2167283e8c8a22496db5a4654f14b4aee7f1b6b959124f9"
	I1123 11:15:23.345385  719494 cri.go:89] found id: "8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679"
	I1123 11:15:23.345388  719494 cri.go:89] found id: "6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf"
	I1123 11:15:23.345391  719494 cri.go:89] found id: "0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae"
	I1123 11:15:23.345399  719494 cri.go:89] found id: "070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	I1123 11:15:23.345402  719494 cri.go:89] found id: "4c5050d05088c8d4aa155ed1ef8c68b82a7e47e3df5aea08651a337b5ecd164f"
	I1123 11:15:23.345432  719494 cri.go:89] found id: ""
	I1123 11:15:23.345488  719494 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:15:23.365033  719494 out.go:203] 
	W1123 11:15:23.369179  719494 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:15:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:15:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 11:15:23.369222  719494 out.go:285] * 
	* 
	W1123 11:15:23.377880  719494 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 11:15:23.382256  719494 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-378086 --alsologtostderr -v=1 failed: exit status 80
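The stderr above shows the shape of the failure: kubelet is stopped and crictl still reports running kube-system containers, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", so after a few retries (retry.go:31) minikube gives up with GUEST_PAUSE. The sketch below reproduces only that retry-then-surface pattern; the back-off values and the local exec stand-in for the SSH runner are illustrative, not minikube's exact code.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runcList stands in for the ssh_runner invocation seen in the log;
	// it runs the same command locally for illustration.
	func runcList() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		backoff := 150 * time.Millisecond
		var lastErr error
		for attempt := 1; attempt <= 4; attempt++ {
			out, err := runcList()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			lastErr = fmt.Errorf("list running: runc: %w\noutput: %s", err, out)
			fmt.Printf("will retry after %v: %v\n", backoff, lastErr)
			time.Sleep(backoff)
			backoff *= 2
		}
		// Once the retries are exhausted, the real command exits with GUEST_PAUSE,
		// as shown in the stderr block above.
		fmt.Printf("X Exiting due to GUEST_PAUSE: Pause: %v\n", lastErr)
	}
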
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-378086
helpers_test.go:243: (dbg) docker inspect old-k8s-version-378086:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388",
	        "Created": "2025-11-23T11:12:54.956037881Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:14:14.48359006Z",
	            "FinishedAt": "2025-11-23T11:14:13.641933321Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/hostname",
	        "HostsPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/hosts",
	        "LogPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388-json.log",
	        "Name": "/old-k8s-version-378086",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378086:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378086",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388",
	                "LowerDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378086",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378086/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378086",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378086",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378086",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d1ae3fb3ac157d181e4bd1ea430ee92bfbf1b1b7ce8fb3c080323cb391c39ac0",
	            "SandboxKey": "/var/run/docker/netns/d1ae3fb3ac15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378086": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:85:9a:42:9c:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad991492cc1b5405599bff7adffac92b2e633269fafa0d884a2cf0b41e4105f6",
	                    "EndpointID": "5a985961deac3a21499a26ac6888b34c32e4515e1fe15f2a406486bce260a115",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-378086",
	                        "c67933f5eb0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
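The NetworkSettings.Ports section above is what the earlier cli_runner line queries to find the node's SSH endpoint: the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} resolves to 33797 for this container. A small sketch of that lookup, driving the docker CLI with the same template; only the profile name is taken from this run, the rest is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same format string that appears in the cli_runner log line above.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"old-k8s-version-378086").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For this run the result is 33797, matching the Ports block above.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
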
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086: exit status 2 (374.727745ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 25: (1.491975298s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-344709 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo containerd config dump                                                                                                                                                                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo crio config                                                                                                                                                                                                             │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p cilium-344709                                                                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ pause   │ -p pause-851396 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p pause-851396                                                                                                                                                                                                                               │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p force-systemd-env-613417                                                                                                                                                                                                                   │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ cert-options-700578 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	│ stop    │ -p old-k8s-version-378086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:14:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:14:14.192039  717171 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:14:14.192213  717171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:14:14.192225  717171 out.go:374] Setting ErrFile to fd 2...
	I1123 11:14:14.192231  717171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:14:14.192463  717171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:14:14.192839  717171 out.go:368] Setting JSON to false
	I1123 11:14:14.193823  717171 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14203,"bootTime":1763882251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:14:14.193935  717171 start.go:143] virtualization:  
	I1123 11:14:14.196931  717171 out.go:179] * [old-k8s-version-378086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:14:14.200953  717171 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:14:14.201109  717171 notify.go:221] Checking for updates...
	I1123 11:14:14.206892  717171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:14:14.209895  717171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:14:14.212742  717171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:14:14.215522  717171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:14:14.218407  717171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:14:14.221889  717171 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:14:14.225285  717171 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 11:14:14.228364  717171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:14:14.253161  717171 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:14:14.253272  717171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:14:14.319685  717171 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:14:14.308823646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:14:14.319782  717171 docker.go:319] overlay module found
	I1123 11:14:14.322855  717171 out.go:179] * Using the docker driver based on existing profile
	I1123 11:14:14.325666  717171 start.go:309] selected driver: docker
	I1123 11:14:14.325687  717171 start.go:927] validating driver "docker" against &{Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:14:14.325783  717171 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:14:14.326542  717171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:14:14.387979  717171 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:14:14.378784907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:14:14.388383  717171 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:14:14.388421  717171 cni.go:84] Creating CNI manager for ""
	I1123 11:14:14.388487  717171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:14:14.388532  717171 start.go:353] cluster config:
	{Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:14:14.393509  717171 out.go:179] * Starting "old-k8s-version-378086" primary control-plane node in "old-k8s-version-378086" cluster
	I1123 11:14:14.396282  717171 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:14:14.399106  717171 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:14:14.401882  717171 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:14:14.401938  717171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 11:14:14.401950  717171 cache.go:65] Caching tarball of preloaded images
	I1123 11:14:14.401952  717171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:14:14.402031  717171 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:14:14.402041  717171 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 11:14:14.402159  717171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json ...
	I1123 11:14:14.421918  717171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:14:14.421941  717171 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:14:14.421961  717171 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:14:14.421992  717171 start.go:360] acquireMachinesLock for old-k8s-version-378086: {Name:mkfc344308e200b270c60104d70fe97a5903afde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:14:14.422060  717171 start.go:364] duration metric: took 45.777µs to acquireMachinesLock for "old-k8s-version-378086"
	I1123 11:14:14.422084  717171 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:14:14.422093  717171 fix.go:54] fixHost starting: 
	I1123 11:14:14.422349  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:14.439252  717171 fix.go:112] recreateIfNeeded on old-k8s-version-378086: state=Stopped err=<nil>
	W1123 11:14:14.439295  717171 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 11:14:14.442453  717171 out.go:252] * Restarting existing docker container for "old-k8s-version-378086" ...
	I1123 11:14:14.442536  717171 cli_runner.go:164] Run: docker start old-k8s-version-378086
	I1123 11:14:14.739502  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:14.763635  717171 kic.go:430] container "old-k8s-version-378086" state is running.
	I1123 11:14:14.764027  717171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:14:14.793348  717171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json ...
	I1123 11:14:14.793675  717171 machine.go:94] provisionDockerMachine start ...
	I1123 11:14:14.793755  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:14.828089  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:14.828422  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:14.828432  717171 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:14:14.829125  717171 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:14:17.986148  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378086
	
	I1123 11:14:17.986170  717171 ubuntu.go:182] provisioning hostname "old-k8s-version-378086"
	I1123 11:14:17.986233  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.007679  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:18.007999  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:18.008016  717171 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-378086 && echo "old-k8s-version-378086" | sudo tee /etc/hostname
	I1123 11:14:18.171893  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378086
	
	I1123 11:14:18.172090  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.189370  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:18.189718  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:18.189740  717171 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-378086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-378086/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-378086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:14:18.341919  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:14:18.341944  717171 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:14:18.342003  717171 ubuntu.go:190] setting up certificates
	I1123 11:14:18.342018  717171 provision.go:84] configureAuth start
	I1123 11:14:18.342102  717171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:14:18.359201  717171 provision.go:143] copyHostCerts
	I1123 11:14:18.359280  717171 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:14:18.359307  717171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:14:18.359387  717171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:14:18.359494  717171 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:14:18.359505  717171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:14:18.359533  717171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:14:18.359599  717171 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:14:18.359609  717171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:14:18.359638  717171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:14:18.359697  717171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-378086 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-378086]
	I1123 11:14:18.741468  717171 provision.go:177] copyRemoteCerts
	I1123 11:14:18.741553  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:14:18.741600  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.759037  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:18.869171  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:14:18.888574  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 11:14:18.906853  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:14:18.924105  717171 provision.go:87] duration metric: took 582.057132ms to configureAuth
	I1123 11:14:18.924137  717171 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:14:18.924336  717171 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:14:18.924441  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.944603  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:18.944942  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:18.944969  717171 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:14:19.313049  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:14:19.313070  717171 machine.go:97] duration metric: took 4.519376787s to provisionDockerMachine
	I1123 11:14:19.313081  717171 start.go:293] postStartSetup for "old-k8s-version-378086" (driver="docker")
	I1123 11:14:19.313091  717171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:14:19.313147  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:14:19.313186  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.337006  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.445586  717171 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:14:19.449316  717171 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:14:19.449345  717171 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:14:19.449358  717171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:14:19.449444  717171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:14:19.449546  717171 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:14:19.449651  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:14:19.457145  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:14:19.475176  717171 start.go:296] duration metric: took 162.072108ms for postStartSetup
	I1123 11:14:19.475301  717171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:14:19.475377  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.493564  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.594909  717171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:14:19.600150  717171 fix.go:56] duration metric: took 5.178050324s for fixHost
	I1123 11:14:19.600176  717171 start.go:83] releasing machines lock for "old-k8s-version-378086", held for 5.178103593s
	I1123 11:14:19.600244  717171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:14:19.619447  717171 ssh_runner.go:195] Run: cat /version.json
	I1123 11:14:19.619506  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.619773  717171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:14:19.619839  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.638951  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.645573  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.749113  717171 ssh_runner.go:195] Run: systemctl --version
	I1123 11:14:19.852827  717171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:14:19.888260  717171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:14:19.893510  717171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:14:19.893587  717171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:14:19.901256  717171 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:14:19.901322  717171 start.go:496] detecting cgroup driver to use...
	I1123 11:14:19.901359  717171 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:14:19.901452  717171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:14:19.916626  717171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:14:19.929717  717171 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:14:19.929826  717171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:14:19.945499  717171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:14:19.958804  717171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:14:20.089984  717171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:14:20.218270  717171 docker.go:234] disabling docker service ...
	I1123 11:14:20.218337  717171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:14:20.234035  717171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:14:20.250390  717171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:14:20.399023  717171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:14:20.520962  717171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:14:20.533836  717171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:14:20.548229  717171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 11:14:20.548348  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.557603  717171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:14:20.557723  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.567225  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.576188  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.585560  717171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:14:20.593868  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.603143  717171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.612353  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.621787  717171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:14:20.629447  717171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:14:20.636902  717171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:14:20.754045  717171 ssh_runner.go:195] Run: sudo systemctl restart crio
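	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place over SSH with sed (pause image, cgroup manager, default sysctls) and then restarts CRI-O. The snippet below is a minimal stand-alone sketch, assuming local file access, of the same kind of idempotent line rewrite; rewriteLine and the hard-coded patterns are illustrative only and are not minikube's actual code.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteLine replaces every line matching pattern with repl in the given
	// file, mirroring the `sudo sed -i 's|^.*cgroup_manager = .*$|...|'` calls
	// seen in the log above. Illustrative sketch; error handling is minimal.
	func rewriteLine(path, pattern, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile("(?m)" + pattern)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
		edits := []struct{ pattern, repl string }{
			{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
			{`^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
		}
		for _, e := range edits {
			if err := rewriteLine(conf, e.pattern, e.repl); err != nil {
				fmt.Fprintln(os.Stderr, "edit failed:", err)
				os.Exit(1)
			}
		}
	}

	Because the patterns match whole existing lines, rerunning the edit leaves an already-configured file unchanged, which is why the log can safely repeat this step on every restart.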
	I1123 11:14:20.945565  717171 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:14:20.945634  717171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:14:20.949487  717171 start.go:564] Will wait 60s for crictl version
	I1123 11:14:20.949603  717171 ssh_runner.go:195] Run: which crictl
	I1123 11:14:20.953116  717171 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:14:20.978958  717171 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:14:20.979042  717171 ssh_runner.go:195] Run: crio --version
	I1123 11:14:21.020512  717171 ssh_runner.go:195] Run: crio --version
	I1123 11:14:21.055441  717171 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 11:14:21.058344  717171 cli_runner.go:164] Run: docker network inspect old-k8s-version-378086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:14:21.074917  717171 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:14:21.078950  717171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:14:21.088409  717171 kubeadm.go:884] updating cluster {Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:14:21.088545  717171 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:14:21.088601  717171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:14:21.127975  717171 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:14:21.128046  717171 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:14:21.128132  717171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:14:21.158064  717171 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:14:21.158089  717171 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:14:21.158098  717171 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1123 11:14:21.158201  717171 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-378086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:14:21.158287  717171 ssh_runner.go:195] Run: crio config
	I1123 11:14:21.229596  717171 cni.go:84] Creating CNI manager for ""
	I1123 11:14:21.229617  717171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:14:21.229642  717171 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:14:21.229666  717171 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-378086 NodeName:old-k8s-version-378086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:14:21.229810  717171 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-378086"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:14:21.229888  717171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 11:14:21.238703  717171 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:14:21.238779  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:14:21.246753  717171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 11:14:21.260079  717171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:14:21.277691  717171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
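	The kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration manifest shown above is generated from the logged kubeadm options and copied to /var/tmp/minikube/kubeadm.yaml.new. The sketch below shows, with Go's text/template, one way such a manifest can be rendered from a parameter struct; ClusterParams, its fields, and the template text are assumptions for illustration, not minikube's actual types or templates.

	package main

	import (
		"os"
		"text/template"
	)

	// ClusterParams is a hypothetical parameter struct carrying only the
	// values that appear in the rendered manifest above.
	type ClusterParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	// tmpl mirrors a small fragment of the manifest from the log; it is an
	// illustrative template, not the one minikube ships.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := ClusterParams{
			AdvertiseAddress: "192.168.85.2",
			BindPort:         8443,
			NodeName:         "old-k8s-version-378086",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.28.0",
		}
		// Render the manifest to stdout; minikube instead writes it over SSH
		// to /var/tmp/minikube/kubeadm.yaml.new, as the log above shows.
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}

	On restart the new file is diffed against the existing /var/tmp/minikube/kubeadm.yaml (see the `sudo diff -u` step later in the log) to decide whether the control plane needs reconfiguration.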
	I1123 11:14:21.290417  717171 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:14:21.294222  717171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:14:21.303902  717171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:14:21.414122  717171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:14:21.431448  717171 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086 for IP: 192.168.85.2
	I1123 11:14:21.431470  717171 certs.go:195] generating shared ca certs ...
	I1123 11:14:21.431486  717171 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:21.431696  717171 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:14:21.431771  717171 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:14:21.431785  717171 certs.go:257] generating profile certs ...
	I1123 11:14:21.431907  717171 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.key
	I1123 11:14:21.432001  717171 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key.0966a661
	I1123 11:14:21.432083  717171 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key
	I1123 11:14:21.432219  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:14:21.432272  717171 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:14:21.432288  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:14:21.432333  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:14:21.432382  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:14:21.432415  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:14:21.432480  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:14:21.433133  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:14:21.458844  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:14:21.480229  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:14:21.503073  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:14:21.524911  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 11:14:21.545290  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:14:21.564242  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:14:21.587154  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:14:21.621040  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:14:21.643136  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:14:21.667542  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:14:21.699052  717171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:14:21.712945  717171 ssh_runner.go:195] Run: openssl version
	I1123 11:14:21.719250  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:14:21.728469  717171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:14:21.732205  717171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:14:21.732322  717171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:14:21.773958  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:14:21.782960  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:14:21.791229  717171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:14:21.795413  717171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:14:21.795524  717171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:14:21.839437  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:14:21.847359  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:14:21.855935  717171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:14:21.859799  717171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:14:21.859901  717171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:14:21.901510  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:14:21.909566  717171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:14:21.913328  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:14:21.955087  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:14:21.996185  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:14:22.038351  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:14:22.087239  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:14:22.133136  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 11:14:22.179750  717171 kubeadm.go:401] StartCluster: {Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:14:22.179895  717171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:14:22.179989  717171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:14:22.271040  717171 cri.go:89] found id: "8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679"
	I1123 11:14:22.271124  717171 cri.go:89] found id: "6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf"
	I1123 11:14:22.271145  717171 cri.go:89] found id: "0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae"
	I1123 11:14:22.271173  717171 cri.go:89] found id: ""
	I1123 11:14:22.271242  717171 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:14:22.302571  717171 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:14:22.302700  717171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:14:22.325849  717171 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:14:22.325922  717171 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:14:22.325997  717171 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:14:22.338870  717171 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:14:22.339542  717171 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-378086" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:14:22.339845  717171 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-378086" cluster setting kubeconfig missing "old-k8s-version-378086" context setting]
	I1123 11:14:22.340331  717171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:22.342056  717171 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:14:22.356680  717171 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 11:14:22.356756  717171 kubeadm.go:602] duration metric: took 30.814936ms to restartPrimaryControlPlane
	I1123 11:14:22.356791  717171 kubeadm.go:403] duration metric: took 177.040026ms to StartCluster
	I1123 11:14:22.356825  717171 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:22.356911  717171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:14:22.357977  717171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:22.358232  717171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:14:22.358641  717171 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:14:22.358659  717171 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:14:22.358926  717171 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-378086"
	I1123 11:14:22.358978  717171 addons.go:70] Setting dashboard=true in profile "old-k8s-version-378086"
	I1123 11:14:22.358990  717171 addons.go:239] Setting addon dashboard=true in "old-k8s-version-378086"
	W1123 11:14:22.358996  717171 addons.go:248] addon dashboard should already be in state true
	I1123 11:14:22.359019  717171 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:14:22.359532  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.358964  717171 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-378086"
	W1123 11:14:22.359735  717171 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:14:22.359767  717171 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:14:22.360190  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.360565  717171 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-378086"
	I1123 11:14:22.360590  717171 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-378086"
	I1123 11:14:22.360874  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.364461  717171 out.go:179] * Verifying Kubernetes components...
	I1123 11:14:22.371842  717171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:14:22.419239  717171 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-378086"
	W1123 11:14:22.419261  717171 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:14:22.419286  717171 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:14:22.419734  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.419924  717171 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:14:22.420097  717171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:14:22.422983  717171 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:14:22.423008  717171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:14:22.423075  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:22.428912  717171 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:14:22.432839  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:14:22.432869  717171 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:14:22.432954  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:22.465560  717171 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:14:22.465583  717171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:14:22.465651  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:22.505724  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:22.526382  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:22.534676  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:22.772082  717171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:14:22.776306  717171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:14:22.818398  717171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:14:22.840882  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:14:22.840956  717171 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:14:22.845095  717171 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-378086" to be "Ready" ...
	I1123 11:14:22.925650  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:14:22.925718  717171 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:14:23.009185  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:14:23.009267  717171 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:14:23.098280  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:14:23.098355  717171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:14:23.138509  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:14:23.138589  717171 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:14:23.163279  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:14:23.163356  717171 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:14:23.184975  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:14:23.185049  717171 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:14:23.209043  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:14:23.209122  717171 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:14:23.230256  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:14:23.230328  717171 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:14:23.257693  717171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:14:26.921923  717171 node_ready.go:49] node "old-k8s-version-378086" is "Ready"
	I1123 11:14:26.921950  717171 node_ready.go:38] duration metric: took 4.076783851s for node "old-k8s-version-378086" to be "Ready" ...
	I1123 11:14:26.921963  717171 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:14:26.922021  717171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:14:29.011464  717171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.235124828s)
	I1123 11:14:29.011605  717171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.193141208s)
	I1123 11:14:29.739077  717171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.48127965s)
	I1123 11:14:29.739295  717171 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.817262301s)
	I1123 11:14:29.739345  717171 api_server.go:72] duration metric: took 7.38105807s to wait for apiserver process to appear ...
	I1123 11:14:29.739366  717171 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:14:29.739420  717171 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 11:14:29.742228  717171 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-378086 addons enable metrics-server
	
	I1123 11:14:29.745398  717171 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 11:14:29.749369  717171 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 11:14:29.749932  717171 addons.go:530] duration metric: took 7.391282357s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 11:14:29.751322  717171 api_server.go:141] control plane version: v1.28.0
	I1123 11:14:29.751344  717171 api_server.go:131] duration metric: took 11.936439ms to wait for apiserver health ...
	I1123 11:14:29.751353  717171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:14:29.756001  717171 system_pods.go:59] 8 kube-system pods found
	I1123 11:14:29.756085  717171 system_pods.go:61] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:14:29.756111  717171 system_pods.go:61] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:14:29.756160  717171 system_pods.go:61] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:14:29.756188  717171 system_pods.go:61] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:14:29.756213  717171 system_pods.go:61] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:14:29.756246  717171 system_pods.go:61] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:14:29.756270  717171 system_pods.go:61] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:14:29.756304  717171 system_pods.go:61] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Running
	I1123 11:14:29.756336  717171 system_pods.go:74] duration metric: took 4.976027ms to wait for pod list to return data ...
	I1123 11:14:29.756362  717171 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:14:29.760029  717171 default_sa.go:45] found service account: "default"
	I1123 11:14:29.760087  717171 default_sa.go:55] duration metric: took 3.706301ms for default service account to be created ...
	I1123 11:14:29.760124  717171 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:14:29.765061  717171 system_pods.go:86] 8 kube-system pods found
	I1123 11:14:29.765140  717171 system_pods.go:89] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:14:29.765180  717171 system_pods.go:89] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:14:29.765207  717171 system_pods.go:89] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:14:29.765233  717171 system_pods.go:89] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:14:29.765267  717171 system_pods.go:89] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:14:29.765291  717171 system_pods.go:89] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:14:29.765313  717171 system_pods.go:89] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:14:29.765349  717171 system_pods.go:89] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Running
	I1123 11:14:29.765379  717171 system_pods.go:126] duration metric: took 5.230537ms to wait for k8s-apps to be running ...
	I1123 11:14:29.765402  717171 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:14:29.765534  717171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:14:29.789905  717171 system_svc.go:56] duration metric: took 24.486179ms WaitForService to wait for kubelet
	I1123 11:14:29.789938  717171 kubeadm.go:587] duration metric: took 7.431649375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:14:29.789968  717171 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:14:29.795285  717171 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:14:29.795318  717171 node_conditions.go:123] node cpu capacity is 2
	I1123 11:14:29.795338  717171 node_conditions.go:105] duration metric: took 5.364922ms to run NodePressure ...
	I1123 11:14:29.795352  717171 start.go:242] waiting for startup goroutines ...
	I1123 11:14:29.795370  717171 start.go:247] waiting for cluster config update ...
	I1123 11:14:29.795423  717171 start.go:256] writing updated cluster config ...
	I1123 11:14:29.795747  717171 ssh_runner.go:195] Run: rm -f paused
	I1123 11:14:29.806590  717171 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:14:29.814975  717171 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lr4ln" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 11:14:31.820842  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:33.821847  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:36.320906  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:38.822360  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:41.321511  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:43.325638  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:45.856517  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:48.321823  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:50.322271  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:52.820349  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:54.820941  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:56.821437  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:58.830346  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:15:01.321099  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:15:03.820655  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:15:06.321531  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	I1123 11:15:07.820146  717171 pod_ready.go:94] pod "coredns-5dd5756b68-lr4ln" is "Ready"
	I1123 11:15:07.820172  717171 pod_ready.go:86] duration metric: took 38.005158758s for pod "coredns-5dd5756b68-lr4ln" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.823003  717171 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.827372  717171 pod_ready.go:94] pod "etcd-old-k8s-version-378086" is "Ready"
	I1123 11:15:07.827398  717171 pod_ready.go:86] duration metric: took 4.369013ms for pod "etcd-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.830278  717171 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.834817  717171 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-378086" is "Ready"
	I1123 11:15:07.834839  717171 pod_ready.go:86] duration metric: took 4.534849ms for pod "kube-apiserver-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.837913  717171 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.018935  717171 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-378086" is "Ready"
	I1123 11:15:08.019020  717171 pod_ready.go:86] duration metric: took 181.079299ms for pod "kube-controller-manager-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.218966  717171 pod_ready.go:83] waiting for pod "kube-proxy-p546f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.618866  717171 pod_ready.go:94] pod "kube-proxy-p546f" is "Ready"
	I1123 11:15:08.618894  717171 pod_ready.go:86] duration metric: took 399.901387ms for pod "kube-proxy-p546f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.819167  717171 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:09.218626  717171 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-378086" is "Ready"
	I1123 11:15:09.218653  717171 pod_ready.go:86] duration metric: took 399.454055ms for pod "kube-scheduler-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:09.218666  717171 pod_ready.go:40] duration metric: took 39.412037952s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:15:09.271584  717171 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 11:15:09.274858  717171 out.go:203] 
	W1123 11:15:09.277846  717171 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 11:15:09.280834  717171 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 11:15:09.283826  717171 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-378086" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.681611207Z" level=info msg="Created container 070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9/dashboard-metrics-scraper" id=0bf1f35b-becb-4577-9ad1-5662671619cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.683197507Z" level=info msg="Starting container: 070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1" id=539f0257-976c-4b0d-b4d0-89c1dedaea04 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.687667214Z" level=info msg="Started container" PID=1703 containerID=070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9/dashboard-metrics-scraper id=539f0257-976c-4b0d-b4d0-89c1dedaea04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa
	Nov 23 11:15:05 old-k8s-version-378086 conmon[1701]: conmon 070e088d6ab1bb07083d <ninfo>: container 1703 exited with status 1
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.837361641Z" level=info msg="Removing container: 29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead" id=b4304591-bf2e-4b21-825d-7a2305e1860f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.846551258Z" level=info msg="Error loading conmon cgroup of container 29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead: cgroup deleted" id=b4304591-bf2e-4b21-825d-7a2305e1860f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.852871088Z" level=info msg="Removed container 29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9/dashboard-metrics-scraper" id=b4304591-bf2e-4b21-825d-7a2305e1860f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.562511605Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.566763062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.566797171Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.566821786Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.570816616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.570847591Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.570868055Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.57409033Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.574123667Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.574145444Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.577170572Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.577202539Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.577223315Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.580266043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.580297929Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.580337708Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.583266769Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.583297858Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	070e088d6ab1b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   a4568087e122d       dashboard-metrics-scraper-5f989dc9cf-4pwv9       kubernetes-dashboard
	72eefe4998ad9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   1ebed9b70c4d3       storage-provisioner                              kube-system
	4c5050d05088c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   b2e2e1f69e355       kubernetes-dashboard-8694d4445c-p96px            kubernetes-dashboard
	3c42b93742133       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   befa320a04bab       coredns-5dd5756b68-lr4ln                         kube-system
	df6da468794be       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   5bec31b78078f       kube-proxy-p546f                                 kube-system
	6088631d886d3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   bed4dfcfe7471       busybox                                          default
	6f712ec8b4c0c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   1ebed9b70c4d3       storage-provisioner                              kube-system
	41652b7068202       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   e6075ae2c4492       kindnet-99vxv                                    kube-system
	e72df448160ac       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   9fee1691d89ed       etcd-old-k8s-version-378086                      kube-system
	8d4aa54773f5a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   43aeee5e36080       kube-controller-manager-old-k8s-version-378086   kube-system
	6ec5ddca657b6       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   652f459ccf2b9       kube-scheduler-old-k8s-version-378086            kube-system
	0dbe5418b22cb       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   84482de019341       kube-apiserver-old-k8s-version-378086            kube-system
	
	
	==> coredns [3c42b937421338466200e60e96d69686288069898351e5d8bd5f9d3a6dcfe764] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50622 - 65434 "HINFO IN 2480388945125585251.5524607998298780146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036321143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-378086
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-378086
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-378086
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_13_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-378086
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:15:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-378086
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                4336eb7a-3e7c-4f09-a2a9-ee819430f43e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-lr4ln                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-378086                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-99vxv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-378086             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-378086    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-p546f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-378086             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4pwv9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-p96px             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m13s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m13s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m13s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-378086 event: Registered Node old-k8s-version-378086 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-378086 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-378086 event: Registered Node old-k8s-version-378086 in Controller
	
	
	==> dmesg <==
	[Nov23 10:53] overlayfs: idmapped layers are currently not supported
	[Nov23 10:54] overlayfs: idmapped layers are currently not supported
	[Nov23 10:55] overlayfs: idmapped layers are currently not supported
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e72df448160ac085b2167283e8c8a22496db5a4654f14b4aee7f1b6b959124f9] <==
	{"level":"info","ts":"2025-11-23T11:14:22.646734Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T11:14:22.64688Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T11:14:22.650279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T11:14:22.650508Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T11:14:22.656911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:14:22.656968Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:14:22.675872Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T11:14:22.693852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T11:14:22.697582Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T11:14:22.710957Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T11:14:22.711014Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T11:14:24.001207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T11:14:24.001374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T11:14:24.001463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T11:14:24.001517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.001551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.001606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.00164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.005324Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-378086 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T11:14:24.005593Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T11:14:24.005744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T11:14:24.005855Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T11:14:24.00681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T11:14:24.009483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T11:14:24.01051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 11:15:24 up  3:57,  0 user,  load average: 1.80, 3.09, 2.69
	Linux old-k8s-version-378086 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [41652b70682024357c15f7e082dfdfdb23f995e78049b69dcbd577a6cfe04c4a] <==
	I1123 11:14:28.360566       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:14:28.374201       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:14:28.374343       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:14:28.374355       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:14:28.374371       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:14:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:14:28.561781       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:14:28.561799       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:14:28.561810       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:14:28.562099       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:14:58.561395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:14:58.562404       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:14:58.562422       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:14:58.562504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:15:00.462065       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:15:00.462099       1 metrics.go:72] Registering metrics
	I1123 11:15:00.462180       1 controller.go:711] "Syncing nftables rules"
	I1123 11:15:08.561608       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:15:08.561675       1 main.go:301] handling current node
	I1123 11:15:18.561378       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:15:18.561445       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae] <==
	I1123 11:14:27.052854       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 11:14:27.053158       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 11:14:27.053181       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 11:14:27.053714       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 11:14:27.053778       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 11:14:27.054721       1 aggregator.go:166] initial CRD sync complete...
	I1123 11:14:27.054744       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 11:14:27.054751       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:14:27.085641       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 11:14:27.093965       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:14:27.139729       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 11:14:27.140866       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:14:27.158128       1 cache.go:39] Caches are synced for autoregister controller
	E1123 11:14:27.158490       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:14:27.560165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:14:29.492344       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 11:14:29.591046       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 11:14:29.620863       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:14:29.634402       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:14:29.647885       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 11:14:29.709445       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.126.65"}
	I1123 11:14:29.731124       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.140.149"}
	I1123 11:14:39.954798       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 11:14:40.058092       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 11:14:40.163618       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679] <==
	I1123 11:14:40.047937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.086µs"
	I1123 11:14:40.066171       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1123 11:14:40.073150       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1123 11:14:40.091805       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4pwv9"
	I1123 11:14:40.091841       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-p96px"
	I1123 11:14:40.120805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.599317ms"
	I1123 11:14:40.121853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.380058ms"
	I1123 11:14:40.137350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.483046ms"
	I1123 11:14:40.137475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.691µs"
	I1123 11:14:40.144327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.262µs"
	I1123 11:14:40.150046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.134272ms"
	I1123 11:14:40.150797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.151µs"
	I1123 11:14:40.175408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.522µs"
	I1123 11:14:40.223525       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 11:14:40.246131       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 11:14:40.246165       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 11:14:44.783419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.405µs"
	I1123 11:14:45.858902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="213.098µs"
	I1123 11:14:46.807125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.082µs"
	I1123 11:14:49.837574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.446765ms"
	I1123 11:14:49.838029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="131.522µs"
	I1123 11:15:05.855211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.111µs"
	I1123 11:15:07.756025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.878468ms"
	I1123 11:15:07.756123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.427µs"
	I1123 11:15:10.420734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.285µs"
	
	
	==> kube-proxy [df6da468794be21cefbc6cb802bef7733829bfed7b575a64f34d2e62f4b2d0db] <==
	I1123 11:14:29.139661       1 server_others.go:69] "Using iptables proxy"
	I1123 11:14:29.159650       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 11:14:29.308757       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:14:29.310768       1 server_others.go:152] "Using iptables Proxier"
	I1123 11:14:29.310811       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 11:14:29.310831       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 11:14:29.310864       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 11:14:29.311097       1 server.go:846] "Version info" version="v1.28.0"
	I1123 11:14:29.311113       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:14:29.314201       1 config.go:188] "Starting service config controller"
	I1123 11:14:29.314240       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 11:14:29.314260       1 config.go:97] "Starting endpoint slice config controller"
	I1123 11:14:29.314264       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 11:14:29.314754       1 config.go:315] "Starting node config controller"
	I1123 11:14:29.314772       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 11:14:29.454819       1 shared_informer.go:318] Caches are synced for service config
	I1123 11:14:29.454881       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 11:14:29.515795       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf] <==
	I1123 11:14:26.477759       1 serving.go:348] Generated self-signed cert in-memory
	I1123 11:14:29.511055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 11:14:29.511092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:14:29.519312       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 11:14:29.519406       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1123 11:14:29.519418       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1123 11:14:29.519436       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 11:14:29.521128       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:14:29.521157       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 11:14:29.521175       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:14:29.521180       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 11:14:29.619744       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1123 11:14:29.622155       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 11:14:29.622239       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: I1123 11:14:40.241399     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cd03be4f-d3cf-411d-81a7-5042463abcd6-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4pwv9\" (UID: \"cd03be4f-d3cf-411d-81a7-5042463abcd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9"
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: I1123 11:14:40.241462     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwdg\" (UniqueName: \"kubernetes.io/projected/2df082a1-1ad6-44e1-8263-c77434c26762-kube-api-access-2mwdg\") pod \"kubernetes-dashboard-8694d4445c-p96px\" (UID: \"2df082a1-1ad6-44e1-8263-c77434c26762\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p96px"
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: I1123 11:14:40.241492     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmqr\" (UniqueName: \"kubernetes.io/projected/cd03be4f-d3cf-411d-81a7-5042463abcd6-kube-api-access-rjmqr\") pod \"dashboard-metrics-scraper-5f989dc9cf-4pwv9\" (UID: \"cd03be4f-d3cf-411d-81a7-5042463abcd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9"
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: W1123 11:14:40.431770     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa WatchSource:0}: Error finding container a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa: Status 404 returned error can't find the container with id a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: W1123 11:14:40.444811     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-b2e2e1f69e355f0d36868561dc094a2616923b68069da9c2001f0c0e8f59dc64 WatchSource:0}: Error finding container b2e2e1f69e355f0d36868561dc094a2616923b68069da9c2001f0c0e8f59dc64: Status 404 returned error can't find the container with id b2e2e1f69e355f0d36868561dc094a2616923b68069da9c2001f0c0e8f59dc64
	Nov 23 11:14:44 old-k8s-version-378086 kubelet[787]: I1123 11:14:44.768678     787 scope.go:117] "RemoveContainer" containerID="23579ad4ed799e148f0206277c2ba17a85c4ebfc6fd76ef84b36bc714f8f9e05"
	Nov 23 11:14:45 old-k8s-version-378086 kubelet[787]: I1123 11:14:45.775636     787 scope.go:117] "RemoveContainer" containerID="23579ad4ed799e148f0206277c2ba17a85c4ebfc6fd76ef84b36bc714f8f9e05"
	Nov 23 11:14:45 old-k8s-version-378086 kubelet[787]: I1123 11:14:45.776603     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:14:45 old-k8s-version-378086 kubelet[787]: E1123 11:14:45.776922     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:14:46 old-k8s-version-378086 kubelet[787]: I1123 11:14:46.780707     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:14:46 old-k8s-version-378086 kubelet[787]: E1123 11:14:46.781434     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:14:50 old-k8s-version-378086 kubelet[787]: I1123 11:14:50.405667     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:14:50 old-k8s-version-378086 kubelet[787]: E1123 11:14:50.406226     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:14:58 old-k8s-version-378086 kubelet[787]: I1123 11:14:58.813878     787 scope.go:117] "RemoveContainer" containerID="6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada"
	Nov 23 11:14:58 old-k8s-version-378086 kubelet[787]: I1123 11:14:58.845604     787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p96px" podStartSLOduration=9.91895456 podCreationTimestamp="2025-11-23 11:14:40 +0000 UTC" firstStartedPulling="2025-11-23 11:14:40.447878415 +0000 UTC m=+19.014881780" lastFinishedPulling="2025-11-23 11:14:49.37239109 +0000 UTC m=+27.939394464" observedRunningTime="2025-11-23 11:14:49.812825542 +0000 UTC m=+28.379828916" watchObservedRunningTime="2025-11-23 11:14:58.843467244 +0000 UTC m=+37.410470610"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: I1123 11:15:05.653901     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: I1123 11:15:05.834672     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: I1123 11:15:05.835010     787 scope.go:117] "RemoveContainer" containerID="070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: E1123 11:15:05.835363     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:15:10 old-k8s-version-378086 kubelet[787]: I1123 11:15:10.404717     787 scope.go:117] "RemoveContainer" containerID="070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	Nov 23 11:15:10 old-k8s-version-378086 kubelet[787]: E1123 11:15:10.405097     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:15:21 old-k8s-version-378086 kubelet[787]: I1123 11:15:21.565914     787 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 11:15:21 old-k8s-version-378086 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:15:21 old-k8s-version-378086 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:15:21 old-k8s-version-378086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4c5050d05088c8d4aa155ed1ef8c68b82a7e47e3df5aea08651a337b5ecd164f] <==
	2025/11/23 11:14:49 Starting overwatch
	2025/11/23 11:14:49 Using namespace: kubernetes-dashboard
	2025/11/23 11:14:49 Using in-cluster config to connect to apiserver
	2025/11/23 11:14:49 Using secret token for csrf signing
	2025/11/23 11:14:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:14:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:14:49 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 11:14:49 Generating JWE encryption key
	2025/11/23 11:14:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:14:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:14:50 Initializing JWE encryption key from synchronized object
	2025/11/23 11:14:50 Creating in-cluster Sidecar client
	2025/11/23 11:14:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:14:50 Serving insecurely on HTTP port: 9090
	2025/11/23 11:15:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada] <==
	I1123 11:14:28.654093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:14:58.656746       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [72eefe4998ad926e91ba0b4aeaa70f2824e1d1d4509369827c4a7c5dda6c05e4] <==
	I1123 11:14:58.875527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:14:58.889249       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:14:58.889300       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 11:15:16.285773       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:15:16.285943       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-378086_d365c105-0a59-4ab4-82c0-06aff9e1c616!
	I1123 11:15:16.286860       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96c35a90-0779-45d9-8ae6-4ff1ea7116b2", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-378086_d365c105-0a59-4ab4-82c0-06aff9e1c616 became leader
	I1123 11:15:16.386522       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-378086_d365c105-0a59-4ab4-82c0-06aff9e1c616!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-378086 -n old-k8s-version-378086
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-378086 -n old-k8s-version-378086: exit status 2 (403.295852ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
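The harness reads one status field at a time through a Go template ({{.APIServer}} above, {{.Host}} further down in this post-mortem). As a minimal sketch, several fields can be queried in a single call; the Kubelet and Kubeconfig field names are assumptions, not taken from this report:

    # hedged example: only Host and APIServer appear in this report, other field names are assumed
    out/minikube-linux-arm64 status -p old-k8s-version-378086 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'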
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-378086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-378086
helpers_test.go:243: (dbg) docker inspect old-k8s-version-378086:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388",
	        "Created": "2025-11-23T11:12:54.956037881Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:14:14.48359006Z",
	            "FinishedAt": "2025-11-23T11:14:13.641933321Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/hostname",
	        "HostsPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/hosts",
	        "LogPath": "/var/lib/docker/containers/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388-json.log",
	        "Name": "/old-k8s-version-378086",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378086:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378086",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388",
	                "LowerDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/673f5db1d4070abaea3990804d5506db3486d53aad8d1c3cb72c5ce26c2592bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378086",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378086/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378086",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378086",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378086",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d1ae3fb3ac157d181e4bd1ea430ee92bfbf1b1b7ce8fb3c080323cb391c39ac0",
	            "SandboxKey": "/var/run/docker/netns/d1ae3fb3ac15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378086": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:85:9a:42:9c:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad991492cc1b5405599bff7adffac92b2e633269fafa0d884a2cf0b41e4105f6",
	                    "EndpointID": "5a985961deac3a21499a26ac6888b34c32e4515e1fe15f2a406486bce260a115",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-378086",
	                        "c67933f5eb0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
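The Ports map in the inspect output above is what minikube reads to find the host-side forwarded ports; the same Go-template lookup appears for 22/tcp in the start logs further down. A minimal sketch for pulling the API server mapping (8443/tcp, host port 33800 in this run) directly:

    # sketch mirroring the inspect template format used in the minikube logs below
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      old-k8s-version-378086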
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086: exit status 2 (383.330376ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 25: (1.728414046s)
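The post-mortem only collects the tail of each component log (logs -n 25). When reproducing locally, a larger capture can be written to a file; this is a sketch, assuming the --file flag is available in this minikube build:

    # hedged example: -n raises the line count, --file is assumed to be supported
    out/minikube-linux-arm64 -p old-k8s-version-378086 logs -n 200 --file=/tmp/old-k8s-version-378086.log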
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-344709 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo containerd config dump                                                                                                                                                                                                  │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ ssh     │ -p cilium-344709 sudo crio config                                                                                                                                                                                                             │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p cilium-344709                                                                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ pause   │ -p pause-851396 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p pause-851396                                                                                                                                                                                                                               │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p force-systemd-env-613417                                                                                                                                                                                                                   │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ cert-options-700578 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	│ stop    │ -p old-k8s-version-378086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:14:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:14:14.192039  717171 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:14:14.192213  717171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:14:14.192225  717171 out.go:374] Setting ErrFile to fd 2...
	I1123 11:14:14.192231  717171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:14:14.192463  717171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:14:14.192839  717171 out.go:368] Setting JSON to false
	I1123 11:14:14.193823  717171 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14203,"bootTime":1763882251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:14:14.193935  717171 start.go:143] virtualization:  
	I1123 11:14:14.196931  717171 out.go:179] * [old-k8s-version-378086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:14:14.200953  717171 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:14:14.201109  717171 notify.go:221] Checking for updates...
	I1123 11:14:14.206892  717171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:14:14.209895  717171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:14:14.212742  717171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:14:14.215522  717171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:14:14.218407  717171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:14:14.221889  717171 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:14:14.225285  717171 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 11:14:14.228364  717171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:14:14.253161  717171 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:14:14.253272  717171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:14:14.319685  717171 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:14:14.308823646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:14:14.319782  717171 docker.go:319] overlay module found
	I1123 11:14:14.322855  717171 out.go:179] * Using the docker driver based on existing profile
	I1123 11:14:14.325666  717171 start.go:309] selected driver: docker
	I1123 11:14:14.325687  717171 start.go:927] validating driver "docker" against &{Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:14:14.325783  717171 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:14:14.326542  717171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:14:14.387979  717171 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:14:14.378784907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:14:14.388383  717171 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:14:14.388421  717171 cni.go:84] Creating CNI manager for ""
	I1123 11:14:14.388487  717171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:14:14.388532  717171 start.go:353] cluster config:
	{Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:14:14.393509  717171 out.go:179] * Starting "old-k8s-version-378086" primary control-plane node in "old-k8s-version-378086" cluster
	I1123 11:14:14.396282  717171 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:14:14.399106  717171 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:14:14.401882  717171 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:14:14.401938  717171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 11:14:14.401950  717171 cache.go:65] Caching tarball of preloaded images
	I1123 11:14:14.401952  717171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:14:14.402031  717171 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:14:14.402041  717171 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 11:14:14.402159  717171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json ...
	I1123 11:14:14.421918  717171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:14:14.421941  717171 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:14:14.421961  717171 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:14:14.421992  717171 start.go:360] acquireMachinesLock for old-k8s-version-378086: {Name:mkfc344308e200b270c60104d70fe97a5903afde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:14:14.422060  717171 start.go:364] duration metric: took 45.777µs to acquireMachinesLock for "old-k8s-version-378086"
	I1123 11:14:14.422084  717171 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:14:14.422093  717171 fix.go:54] fixHost starting: 
	I1123 11:14:14.422349  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:14.439252  717171 fix.go:112] recreateIfNeeded on old-k8s-version-378086: state=Stopped err=<nil>
	W1123 11:14:14.439295  717171 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 11:14:14.442453  717171 out.go:252] * Restarting existing docker container for "old-k8s-version-378086" ...
	I1123 11:14:14.442536  717171 cli_runner.go:164] Run: docker start old-k8s-version-378086
	I1123 11:14:14.739502  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:14.763635  717171 kic.go:430] container "old-k8s-version-378086" state is running.
	I1123 11:14:14.764027  717171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:14:14.793348  717171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/config.json ...
	I1123 11:14:14.793675  717171 machine.go:94] provisionDockerMachine start ...
	I1123 11:14:14.793755  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:14.828089  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:14.828422  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:14.828432  717171 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:14:14.829125  717171 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:14:17.986148  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378086
	
	I1123 11:14:17.986170  717171 ubuntu.go:182] provisioning hostname "old-k8s-version-378086"
	I1123 11:14:17.986233  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.007679  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:18.007999  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:18.008016  717171 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-378086 && echo "old-k8s-version-378086" | sudo tee /etc/hostname
	I1123 11:14:18.171893  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378086
	
	I1123 11:14:18.172090  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.189370  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:18.189718  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:18.189740  717171 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-378086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-378086/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-378086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:14:18.341919  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:14:18.341944  717171 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:14:18.342003  717171 ubuntu.go:190] setting up certificates
	I1123 11:14:18.342018  717171 provision.go:84] configureAuth start
	I1123 11:14:18.342102  717171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:14:18.359201  717171 provision.go:143] copyHostCerts
	I1123 11:14:18.359280  717171 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:14:18.359307  717171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:14:18.359387  717171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:14:18.359494  717171 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:14:18.359505  717171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:14:18.359533  717171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:14:18.359599  717171 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:14:18.359609  717171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:14:18.359638  717171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:14:18.359697  717171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-378086 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-378086]
	I1123 11:14:18.741468  717171 provision.go:177] copyRemoteCerts
	I1123 11:14:18.741553  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:14:18.741600  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.759037  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:18.869171  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:14:18.888574  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 11:14:18.906853  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:14:18.924105  717171 provision.go:87] duration metric: took 582.057132ms to configureAuth
	I1123 11:14:18.924137  717171 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:14:18.924336  717171 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:14:18.924441  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:18.944603  717171 main.go:143] libmachine: Using SSH client type: native
	I1123 11:14:18.944942  717171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1123 11:14:18.944969  717171 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:14:19.313049  717171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:14:19.313070  717171 machine.go:97] duration metric: took 4.519376787s to provisionDockerMachine
	I1123 11:14:19.313081  717171 start.go:293] postStartSetup for "old-k8s-version-378086" (driver="docker")
	I1123 11:14:19.313091  717171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:14:19.313147  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:14:19.313186  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.337006  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.445586  717171 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:14:19.449316  717171 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:14:19.449345  717171 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:14:19.449358  717171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:14:19.449444  717171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:14:19.449546  717171 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:14:19.449651  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:14:19.457145  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:14:19.475176  717171 start.go:296] duration metric: took 162.072108ms for postStartSetup
	I1123 11:14:19.475301  717171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:14:19.475377  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.493564  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.594909  717171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:14:19.600150  717171 fix.go:56] duration metric: took 5.178050324s for fixHost
	I1123 11:14:19.600176  717171 start.go:83] releasing machines lock for "old-k8s-version-378086", held for 5.178103593s
	I1123 11:14:19.600244  717171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378086
	I1123 11:14:19.619447  717171 ssh_runner.go:195] Run: cat /version.json
	I1123 11:14:19.619506  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.619773  717171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:14:19.619839  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:19.638951  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.645573  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:19.749113  717171 ssh_runner.go:195] Run: systemctl --version
	I1123 11:14:19.852827  717171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:14:19.888260  717171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:14:19.893510  717171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:14:19.893587  717171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:14:19.901256  717171 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:14:19.901322  717171 start.go:496] detecting cgroup driver to use...
	I1123 11:14:19.901359  717171 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:14:19.901452  717171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:14:19.916626  717171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:14:19.929717  717171 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:14:19.929826  717171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:14:19.945499  717171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:14:19.958804  717171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:14:20.089984  717171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:14:20.218270  717171 docker.go:234] disabling docker service ...
	I1123 11:14:20.218337  717171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:14:20.234035  717171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:14:20.250390  717171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:14:20.399023  717171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:14:20.520962  717171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:14:20.533836  717171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:14:20.548229  717171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 11:14:20.548348  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.557603  717171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:14:20.557723  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.567225  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.576188  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.585560  717171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:14:20.593868  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.603143  717171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.612353  717171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:14:20.621787  717171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:14:20.629447  717171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:14:20.636902  717171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:14:20.754045  717171 ssh_runner.go:195] Run: sudo systemctl restart crio
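The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before systemd is reloaded and CRI-O restarted. A hedged way to confirm the drop-in ended up with the expected values; only the keys below are taken from the commands in the log, the file's section layout is not shown:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",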
	I1123 11:14:20.945565  717171 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:14:20.945634  717171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:14:20.949487  717171 start.go:564] Will wait 60s for crictl version
	I1123 11:14:20.949603  717171 ssh_runner.go:195] Run: which crictl
	I1123 11:14:20.953116  717171 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:14:20.978958  717171 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:14:20.979042  717171 ssh_runner.go:195] Run: crio --version
	I1123 11:14:21.020512  717171 ssh_runner.go:195] Run: crio --version
	I1123 11:14:21.055441  717171 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 11:14:21.058344  717171 cli_runner.go:164] Run: docker network inspect old-k8s-version-378086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:14:21.074917  717171 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:14:21.078950  717171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:14:21.088409  717171 kubeadm.go:884] updating cluster {Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:14:21.088545  717171 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 11:14:21.088601  717171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:14:21.127975  717171 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:14:21.128046  717171 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:14:21.128132  717171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:14:21.158064  717171 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:14:21.158089  717171 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:14:21.158098  717171 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1123 11:14:21.158201  717171 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-378086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:14:21.158287  717171 ssh_runner.go:195] Run: crio config
	I1123 11:14:21.229596  717171 cni.go:84] Creating CNI manager for ""
	I1123 11:14:21.229617  717171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:14:21.229642  717171 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:14:21.229666  717171 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-378086 NodeName:old-k8s-version-378086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:14:21.229810  717171 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-378086"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:14:21.229888  717171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 11:14:21.238703  717171 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:14:21.238779  717171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:14:21.246753  717171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 11:14:21.260079  717171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:14:21.277691  717171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
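At this point the kubelet drop-in, the kubelet unit file, and the generated kubeadm config (printed in full above) have been copied to the node. A sketch for dry-checking that config with the bundled kubeadm binary; `kubeadm config validate` exists in recent kubeadm releases, but treat the exact subcommand and path as assumptions, not something this run executed:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new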
	I1123 11:14:21.290417  717171 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:14:21.294222  717171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
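Both host.minikube.internal and control-plane.minikube.internal are written with the same grep-out-then-append-and-copy pattern shown above, which keeps /etc/hosts free of duplicate entries across restarts. A one-line check, sketched:

    grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts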
	I1123 11:14:21.303902  717171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:14:21.414122  717171 ssh_runner.go:195] Run: sudo systemctl start kubelet
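With the unit files in place, systemd is reloaded and kubelet started. A quick way to confirm the service actually came up, sketched with standard systemd/journald commands:

    systemctl is-active kubelet
    sudo journalctl -u kubelet -n 20 --no-pager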
	I1123 11:14:21.431448  717171 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086 for IP: 192.168.85.2
	I1123 11:14:21.431470  717171 certs.go:195] generating shared ca certs ...
	I1123 11:14:21.431486  717171 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:21.431696  717171 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:14:21.431771  717171 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:14:21.431785  717171 certs.go:257] generating profile certs ...
	I1123 11:14:21.431907  717171 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.key
	I1123 11:14:21.432001  717171 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key.0966a661
	I1123 11:14:21.432083  717171 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key
	I1123 11:14:21.432219  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:14:21.432272  717171 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:14:21.432288  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:14:21.432333  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:14:21.432382  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:14:21.432415  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:14:21.432480  717171 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:14:21.433133  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:14:21.458844  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:14:21.480229  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:14:21.503073  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:14:21.524911  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 11:14:21.545290  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:14:21.564242  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:14:21.587154  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:14:21.621040  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:14:21.643136  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:14:21.667542  717171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:14:21.699052  717171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:14:21.712945  717171 ssh_runner.go:195] Run: openssl version
	I1123 11:14:21.719250  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:14:21.728469  717171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:14:21.732205  717171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:14:21.732322  717171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:14:21.773958  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:14:21.782960  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:14:21.791229  717171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:14:21.795413  717171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:14:21.795524  717171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:14:21.839437  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:14:21.847359  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:14:21.855935  717171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:14:21.859799  717171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:14:21.859901  717171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:14:21.901510  717171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
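The openssl/ln sequence above implements the standard subject-hash layout OpenSSL uses to locate CA certificates: each PEM under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs. A condensed sketch of the same pattern for one of the certificates from this run:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should print ": OK"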
	I1123 11:14:21.909566  717171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:14:21.913328  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:14:21.955087  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:14:21.996185  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:14:22.038351  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:14:22.087239  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:14:22.133136  717171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
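The `-checkend 86400` probes above exit non-zero when a certificate expires within the next 24 hours, presumably so the restart path can decide whether the existing control-plane certs are safe to reuse. The same check as an explicit loop, sketched over two of the files from the log:

    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$c" \
        && echo "$c: valid for at least 24h" || echo "$c: expires within 24h"
    done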
	I1123 11:14:22.179750  717171 kubeadm.go:401] StartCluster: {Name:old-k8s-version-378086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-378086 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:14:22.179895  717171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:14:22.179989  717171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:14:22.271040  717171 cri.go:89] found id: "8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679"
	I1123 11:14:22.271124  717171 cri.go:89] found id: "6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf"
	I1123 11:14:22.271145  717171 cri.go:89] found id: "0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae"
	I1123 11:14:22.271173  717171 cri.go:89] found id: ""
	I1123 11:14:22.271242  717171 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:14:22.302571  717171 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:14:22.302700  717171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:14:22.325849  717171 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:14:22.325922  717171 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:14:22.325997  717171 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:14:22.338870  717171 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:14:22.339542  717171 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-378086" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:14:22.339845  717171 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-378086" cluster setting kubeconfig missing "old-k8s-version-378086" context setting]
	I1123 11:14:22.340331  717171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:22.342056  717171 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:14:22.356680  717171 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 11:14:22.356756  717171 kubeadm.go:602] duration metric: took 30.814936ms to restartPrimaryControlPlane
	I1123 11:14:22.356791  717171 kubeadm.go:403] duration metric: took 177.040026ms to StartCluster
	I1123 11:14:22.356825  717171 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:22.356911  717171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:14:22.357977  717171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:14:22.358232  717171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:14:22.358641  717171 config.go:182] Loaded profile config "old-k8s-version-378086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 11:14:22.358659  717171 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:14:22.358926  717171 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-378086"
	I1123 11:14:22.358978  717171 addons.go:70] Setting dashboard=true in profile "old-k8s-version-378086"
	I1123 11:14:22.358990  717171 addons.go:239] Setting addon dashboard=true in "old-k8s-version-378086"
	W1123 11:14:22.358996  717171 addons.go:248] addon dashboard should already be in state true
	I1123 11:14:22.359019  717171 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:14:22.359532  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.358964  717171 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-378086"
	W1123 11:14:22.359735  717171 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:14:22.359767  717171 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:14:22.360190  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.360565  717171 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-378086"
	I1123 11:14:22.360590  717171 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-378086"
	I1123 11:14:22.360874  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.364461  717171 out.go:179] * Verifying Kubernetes components...
	I1123 11:14:22.371842  717171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:14:22.419239  717171 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-378086"
	W1123 11:14:22.419261  717171 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:14:22.419286  717171 host.go:66] Checking if "old-k8s-version-378086" exists ...
	I1123 11:14:22.419734  717171 cli_runner.go:164] Run: docker container inspect old-k8s-version-378086 --format={{.State.Status}}
	I1123 11:14:22.419924  717171 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:14:22.420097  717171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:14:22.422983  717171 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:14:22.423008  717171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:14:22.423075  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:22.428912  717171 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:14:22.432839  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:14:22.432869  717171 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:14:22.432954  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:22.465560  717171 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:14:22.465583  717171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:14:22.465651  717171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378086
	I1123 11:14:22.505724  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:22.526382  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:22.534676  717171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/old-k8s-version-378086/id_rsa Username:docker}
	I1123 11:14:22.772082  717171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:14:22.776306  717171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:14:22.818398  717171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:14:22.840882  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:14:22.840956  717171 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:14:22.845095  717171 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-378086" to be "Ready" ...
	I1123 11:14:22.925650  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:14:22.925718  717171 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:14:23.009185  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:14:23.009267  717171 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:14:23.098280  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:14:23.098355  717171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:14:23.138509  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:14:23.138589  717171 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:14:23.163279  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:14:23.163356  717171 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:14:23.184975  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:14:23.185049  717171 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:14:23.209043  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:14:23.209122  717171 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:14:23.230256  717171 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:14:23.230328  717171 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:14:23.257693  717171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:14:26.921923  717171 node_ready.go:49] node "old-k8s-version-378086" is "Ready"
	I1123 11:14:26.921950  717171 node_ready.go:38] duration metric: took 4.076783851s for node "old-k8s-version-378086" to be "Ready" ...
	I1123 11:14:26.921963  717171 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:14:26.922021  717171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:14:29.011464  717171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.235124828s)
	I1123 11:14:29.011605  717171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.193141208s)
	I1123 11:14:29.739077  717171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.48127965s)
	I1123 11:14:29.739295  717171 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.817262301s)
	I1123 11:14:29.739345  717171 api_server.go:72] duration metric: took 7.38105807s to wait for apiserver process to appear ...
	I1123 11:14:29.739366  717171 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:14:29.739420  717171 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 11:14:29.742228  717171 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-378086 addons enable metrics-server
	
	I1123 11:14:29.745398  717171 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 11:14:29.749369  717171 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
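The restart path finishes by polling the apiserver's /healthz endpoint until it returns 200, as logged above. The equivalent manual probe, sketched with certificate verification skipped and assuming the default RBAC that lets unauthenticated clients read /healthz:

    curl -sk https://192.168.85.2:8443/healthz   # prints "ok" when the apiserver is healthy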
	I1123 11:14:29.749932  717171 addons.go:530] duration metric: took 7.391282357s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 11:14:29.751322  717171 api_server.go:141] control plane version: v1.28.0
	I1123 11:14:29.751344  717171 api_server.go:131] duration metric: took 11.936439ms to wait for apiserver health ...
	I1123 11:14:29.751353  717171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:14:29.756001  717171 system_pods.go:59] 8 kube-system pods found
	I1123 11:14:29.756085  717171 system_pods.go:61] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:14:29.756111  717171 system_pods.go:61] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:14:29.756160  717171 system_pods.go:61] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:14:29.756188  717171 system_pods.go:61] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:14:29.756213  717171 system_pods.go:61] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:14:29.756246  717171 system_pods.go:61] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:14:29.756270  717171 system_pods.go:61] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:14:29.756304  717171 system_pods.go:61] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Running
	I1123 11:14:29.756336  717171 system_pods.go:74] duration metric: took 4.976027ms to wait for pod list to return data ...
	I1123 11:14:29.756362  717171 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:14:29.760029  717171 default_sa.go:45] found service account: "default"
	I1123 11:14:29.760087  717171 default_sa.go:55] duration metric: took 3.706301ms for default service account to be created ...
	I1123 11:14:29.760124  717171 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:14:29.765061  717171 system_pods.go:86] 8 kube-system pods found
	I1123 11:14:29.765140  717171 system_pods.go:89] "coredns-5dd5756b68-lr4ln" [bb9ae516-3281-45af-9186-d257de3155f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:14:29.765180  717171 system_pods.go:89] "etcd-old-k8s-version-378086" [18586d34-bead-4fff-abaa-71fa87220d66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:14:29.765207  717171 system_pods.go:89] "kindnet-99vxv" [f7ac305c-9238-47e4-9fe9-101bcf9865f7] Running
	I1123 11:14:29.765233  717171 system_pods.go:89] "kube-apiserver-old-k8s-version-378086" [2bb8d4d2-ba88-438e-9ef5-ffaa0af29f3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:14:29.765267  717171 system_pods.go:89] "kube-controller-manager-old-k8s-version-378086" [9cdce432-69e2-4ad1-a2ef-aef764362a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:14:29.765291  717171 system_pods.go:89] "kube-proxy-p546f" [c0ebea1b-f874-4486-a261-3541f3db2d42] Running
	I1123 11:14:29.765313  717171 system_pods.go:89] "kube-scheduler-old-k8s-version-378086" [9661a3d9-a587-4799-97f2-d630d44973a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:14:29.765349  717171 system_pods.go:89] "storage-provisioner" [6c2b2474-9610-4bd7-9676-545cf9ec1767] Running
	I1123 11:14:29.765379  717171 system_pods.go:126] duration metric: took 5.230537ms to wait for k8s-apps to be running ...
	I1123 11:14:29.765402  717171 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:14:29.765534  717171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:14:29.789905  717171 system_svc.go:56] duration metric: took 24.486179ms WaitForService to wait for kubelet
	I1123 11:14:29.789938  717171 kubeadm.go:587] duration metric: took 7.431649375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:14:29.789968  717171 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:14:29.795285  717171 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:14:29.795318  717171 node_conditions.go:123] node cpu capacity is 2
	I1123 11:14:29.795338  717171 node_conditions.go:105] duration metric: took 5.364922ms to run NodePressure ...
	I1123 11:14:29.795352  717171 start.go:242] waiting for startup goroutines ...
	I1123 11:14:29.795370  717171 start.go:247] waiting for cluster config update ...
	I1123 11:14:29.795423  717171 start.go:256] writing updated cluster config ...
	I1123 11:14:29.795747  717171 ssh_runner.go:195] Run: rm -f paused
	I1123 11:14:29.806590  717171 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:14:29.814975  717171 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lr4ln" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 11:14:31.820842  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:33.821847  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:36.320906  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:38.822360  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:41.321511  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:43.325638  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:45.856517  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:48.321823  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:50.322271  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:52.820349  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:54.820941  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:56.821437  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:14:58.830346  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:15:01.321099  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:15:03.820655  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	W1123 11:15:06.321531  717171 pod_ready.go:104] pod "coredns-5dd5756b68-lr4ln" is not "Ready", error: <nil>
	I1123 11:15:07.820146  717171 pod_ready.go:94] pod "coredns-5dd5756b68-lr4ln" is "Ready"
	I1123 11:15:07.820172  717171 pod_ready.go:86] duration metric: took 38.005158758s for pod "coredns-5dd5756b68-lr4ln" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.823003  717171 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.827372  717171 pod_ready.go:94] pod "etcd-old-k8s-version-378086" is "Ready"
	I1123 11:15:07.827398  717171 pod_ready.go:86] duration metric: took 4.369013ms for pod "etcd-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.830278  717171 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.834817  717171 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-378086" is "Ready"
	I1123 11:15:07.834839  717171 pod_ready.go:86] duration metric: took 4.534849ms for pod "kube-apiserver-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:07.837913  717171 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.018935  717171 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-378086" is "Ready"
	I1123 11:15:08.019020  717171 pod_ready.go:86] duration metric: took 181.079299ms for pod "kube-controller-manager-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.218966  717171 pod_ready.go:83] waiting for pod "kube-proxy-p546f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.618866  717171 pod_ready.go:94] pod "kube-proxy-p546f" is "Ready"
	I1123 11:15:08.618894  717171 pod_ready.go:86] duration metric: took 399.901387ms for pod "kube-proxy-p546f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:08.819167  717171 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:09.218626  717171 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-378086" is "Ready"
	I1123 11:15:09.218653  717171 pod_ready.go:86] duration metric: took 399.454055ms for pod "kube-scheduler-old-k8s-version-378086" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:15:09.218666  717171 pod_ready.go:40] duration metric: took 39.412037952s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:15:09.271584  717171 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 11:15:09.274858  717171 out.go:203] 
	W1123 11:15:09.277846  717171 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 11:15:09.280834  717171 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 11:15:09.283826  717171 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-378086" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.681611207Z" level=info msg="Created container 070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9/dashboard-metrics-scraper" id=0bf1f35b-becb-4577-9ad1-5662671619cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.683197507Z" level=info msg="Starting container: 070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1" id=539f0257-976c-4b0d-b4d0-89c1dedaea04 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.687667214Z" level=info msg="Started container" PID=1703 containerID=070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9/dashboard-metrics-scraper id=539f0257-976c-4b0d-b4d0-89c1dedaea04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa
	Nov 23 11:15:05 old-k8s-version-378086 conmon[1701]: conmon 070e088d6ab1bb07083d <ninfo>: container 1703 exited with status 1
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.837361641Z" level=info msg="Removing container: 29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead" id=b4304591-bf2e-4b21-825d-7a2305e1860f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.846551258Z" level=info msg="Error loading conmon cgroup of container 29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead: cgroup deleted" id=b4304591-bf2e-4b21-825d-7a2305e1860f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:15:05 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:05.852871088Z" level=info msg="Removed container 29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9/dashboard-metrics-scraper" id=b4304591-bf2e-4b21-825d-7a2305e1860f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.562511605Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.566763062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.566797171Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.566821786Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.570816616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.570847591Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.570868055Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.57409033Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.574123667Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.574145444Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.577170572Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.577202539Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.577223315Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.580266043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.580297929Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.580337708Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.583266769Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:15:08 old-k8s-version-378086 crio[655]: time="2025-11-23T11:15:08.583297858Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	070e088d6ab1b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   a4568087e122d       dashboard-metrics-scraper-5f989dc9cf-4pwv9       kubernetes-dashboard
	72eefe4998ad9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   1ebed9b70c4d3       storage-provisioner                              kube-system
	4c5050d05088c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   b2e2e1f69e355       kubernetes-dashboard-8694d4445c-p96px            kubernetes-dashboard
	3c42b93742133       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           58 seconds ago       Running             coredns                     1                   befa320a04bab       coredns-5dd5756b68-lr4ln                         kube-system
	df6da468794be       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   5bec31b78078f       kube-proxy-p546f                                 kube-system
	6088631d886d3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   bed4dfcfe7471       busybox                                          default
	6f712ec8b4c0c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   1ebed9b70c4d3       storage-provisioner                              kube-system
	41652b7068202       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   e6075ae2c4492       kindnet-99vxv                                    kube-system
	e72df448160ac       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   9fee1691d89ed       etcd-old-k8s-version-378086                      kube-system
	8d4aa54773f5a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   43aeee5e36080       kube-controller-manager-old-k8s-version-378086   kube-system
	6ec5ddca657b6       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   652f459ccf2b9       kube-scheduler-old-k8s-version-378086            kube-system
	0dbe5418b22cb       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   84482de019341       kube-apiserver-old-k8s-version-378086            kube-system
	
	
	==> coredns [3c42b937421338466200e60e96d69686288069898351e5d8bd5f9d3a6dcfe764] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50622 - 65434 "HINFO IN 2480388945125585251.5524607998298780146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036321143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-378086
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-378086
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-378086
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_13_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-378086
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:15:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:14:58 +0000   Sun, 23 Nov 2025 11:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-378086
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                4336eb7a-3e7c-4f09-a2a9-ee819430f43e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-lr4ln                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-old-k8s-version-378086                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-99vxv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-old-k8s-version-378086             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-378086    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-p546f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-old-k8s-version-378086             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4pwv9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-p96px             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m16s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m16s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x8 over 2m16s)  kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s                   node-controller  Node old-k8s-version-378086 event: Registered Node old-k8s-version-378086 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-378086 status is now: NodeReady
	  Normal  Starting                 66s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node old-k8s-version-378086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node old-k8s-version-378086 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node old-k8s-version-378086 event: Registered Node old-k8s-version-378086 in Controller
	
	
	==> dmesg <==
	[Nov23 10:53] overlayfs: idmapped layers are currently not supported
	[Nov23 10:54] overlayfs: idmapped layers are currently not supported
	[Nov23 10:55] overlayfs: idmapped layers are currently not supported
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e72df448160ac085b2167283e8c8a22496db5a4654f14b4aee7f1b6b959124f9] <==
	{"level":"info","ts":"2025-11-23T11:14:22.646734Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T11:14:22.64688Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T11:14:22.650279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T11:14:22.650508Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T11:14:22.656911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:14:22.656968Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T11:14:22.675872Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T11:14:22.693852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T11:14:22.697582Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T11:14:22.710957Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T11:14:22.711014Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T11:14:24.001207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T11:14:24.001374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T11:14:24.001463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T11:14:24.001517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.001551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.001606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.00164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T11:14:24.005324Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-378086 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T11:14:24.005593Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T11:14:24.005744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T11:14:24.005855Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T11:14:24.00681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T11:14:24.009483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T11:14:24.01051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 11:15:27 up  3:57,  0 user,  load average: 1.80, 3.09, 2.69
	Linux old-k8s-version-378086 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [41652b70682024357c15f7e082dfdfdb23f995e78049b69dcbd577a6cfe04c4a] <==
	I1123 11:14:28.360566       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:14:28.374201       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:14:28.374343       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:14:28.374355       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:14:28.374371       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:14:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:14:28.561781       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:14:28.561799       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:14:28.561810       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:14:28.562099       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:14:58.561395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:14:58.562404       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:14:58.562422       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:14:58.562504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:15:00.462065       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:15:00.462099       1 metrics.go:72] Registering metrics
	I1123 11:15:00.462180       1 controller.go:711] "Syncing nftables rules"
	I1123 11:15:08.561608       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:15:08.561675       1 main.go:301] handling current node
	I1123 11:15:18.561378       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:15:18.561445       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0dbe5418b22cba14abfbf3c40f46993c2e2412f743c50e0de11a3896cf3963ae] <==
	I1123 11:14:27.052854       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 11:14:27.053158       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 11:14:27.053181       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 11:14:27.053714       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 11:14:27.053778       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 11:14:27.054721       1 aggregator.go:166] initial CRD sync complete...
	I1123 11:14:27.054744       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 11:14:27.054751       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:14:27.085641       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 11:14:27.093965       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:14:27.139729       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 11:14:27.140866       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:14:27.158128       1 cache.go:39] Caches are synced for autoregister controller
	E1123 11:14:27.158490       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:14:27.560165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:14:29.492344       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 11:14:29.591046       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 11:14:29.620863       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:14:29.634402       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:14:29.647885       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 11:14:29.709445       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.126.65"}
	I1123 11:14:29.731124       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.140.149"}
	I1123 11:14:39.954798       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 11:14:40.058092       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 11:14:40.163618       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8d4aa54773f5ab9861e6928e7c4b9c58106a13aedd25d90798c12d0368069679] <==
	I1123 11:14:40.047937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.086µs"
	I1123 11:14:40.066171       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1123 11:14:40.073150       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1123 11:14:40.091805       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4pwv9"
	I1123 11:14:40.091841       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-p96px"
	I1123 11:14:40.120805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.599317ms"
	I1123 11:14:40.121853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.380058ms"
	I1123 11:14:40.137350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.483046ms"
	I1123 11:14:40.137475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.691µs"
	I1123 11:14:40.144327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.262µs"
	I1123 11:14:40.150046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.134272ms"
	I1123 11:14:40.150797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.151µs"
	I1123 11:14:40.175408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.522µs"
	I1123 11:14:40.223525       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 11:14:40.246131       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 11:14:40.246165       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 11:14:44.783419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.405µs"
	I1123 11:14:45.858902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="213.098µs"
	I1123 11:14:46.807125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.082µs"
	I1123 11:14:49.837574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.446765ms"
	I1123 11:14:49.838029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="131.522µs"
	I1123 11:15:05.855211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.111µs"
	I1123 11:15:07.756025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.878468ms"
	I1123 11:15:07.756123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.427µs"
	I1123 11:15:10.420734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.285µs"
	
	
	==> kube-proxy [df6da468794be21cefbc6cb802bef7733829bfed7b575a64f34d2e62f4b2d0db] <==
	I1123 11:14:29.139661       1 server_others.go:69] "Using iptables proxy"
	I1123 11:14:29.159650       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 11:14:29.308757       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:14:29.310768       1 server_others.go:152] "Using iptables Proxier"
	I1123 11:14:29.310811       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 11:14:29.310831       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 11:14:29.310864       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 11:14:29.311097       1 server.go:846] "Version info" version="v1.28.0"
	I1123 11:14:29.311113       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:14:29.314201       1 config.go:188] "Starting service config controller"
	I1123 11:14:29.314240       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 11:14:29.314260       1 config.go:97] "Starting endpoint slice config controller"
	I1123 11:14:29.314264       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 11:14:29.314754       1 config.go:315] "Starting node config controller"
	I1123 11:14:29.314772       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 11:14:29.454819       1 shared_informer.go:318] Caches are synced for service config
	I1123 11:14:29.454881       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 11:14:29.515795       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6ec5ddca657b65a61643f5d32fc6ec019a0ca1e01feaeeaa22c3128b331fb1cf] <==
	I1123 11:14:26.477759       1 serving.go:348] Generated self-signed cert in-memory
	I1123 11:14:29.511055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 11:14:29.511092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:14:29.519312       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 11:14:29.519406       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1123 11:14:29.519418       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1123 11:14:29.519436       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 11:14:29.521128       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:14:29.521157       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 11:14:29.521175       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:14:29.521180       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 11:14:29.619744       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1123 11:14:29.622155       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 11:14:29.622239       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: I1123 11:14:40.241399     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cd03be4f-d3cf-411d-81a7-5042463abcd6-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4pwv9\" (UID: \"cd03be4f-d3cf-411d-81a7-5042463abcd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9"
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: I1123 11:14:40.241462     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwdg\" (UniqueName: \"kubernetes.io/projected/2df082a1-1ad6-44e1-8263-c77434c26762-kube-api-access-2mwdg\") pod \"kubernetes-dashboard-8694d4445c-p96px\" (UID: \"2df082a1-1ad6-44e1-8263-c77434c26762\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p96px"
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: I1123 11:14:40.241492     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmqr\" (UniqueName: \"kubernetes.io/projected/cd03be4f-d3cf-411d-81a7-5042463abcd6-kube-api-access-rjmqr\") pod \"dashboard-metrics-scraper-5f989dc9cf-4pwv9\" (UID: \"cd03be4f-d3cf-411d-81a7-5042463abcd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9"
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: W1123 11:14:40.431770     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa WatchSource:0}: Error finding container a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa: Status 404 returned error can't find the container with id a4568087e122d14a6625f0445a15e732a4c14121b2ed31dbf1bcf04e92d29dfa
	Nov 23 11:14:40 old-k8s-version-378086 kubelet[787]: W1123 11:14:40.444811     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c67933f5eb0c3e99ce90536d72838792c6d486e9817ab07ee0e15296879f8388/crio-b2e2e1f69e355f0d36868561dc094a2616923b68069da9c2001f0c0e8f59dc64 WatchSource:0}: Error finding container b2e2e1f69e355f0d36868561dc094a2616923b68069da9c2001f0c0e8f59dc64: Status 404 returned error can't find the container with id b2e2e1f69e355f0d36868561dc094a2616923b68069da9c2001f0c0e8f59dc64
	Nov 23 11:14:44 old-k8s-version-378086 kubelet[787]: I1123 11:14:44.768678     787 scope.go:117] "RemoveContainer" containerID="23579ad4ed799e148f0206277c2ba17a85c4ebfc6fd76ef84b36bc714f8f9e05"
	Nov 23 11:14:45 old-k8s-version-378086 kubelet[787]: I1123 11:14:45.775636     787 scope.go:117] "RemoveContainer" containerID="23579ad4ed799e148f0206277c2ba17a85c4ebfc6fd76ef84b36bc714f8f9e05"
	Nov 23 11:14:45 old-k8s-version-378086 kubelet[787]: I1123 11:14:45.776603     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:14:45 old-k8s-version-378086 kubelet[787]: E1123 11:14:45.776922     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:14:46 old-k8s-version-378086 kubelet[787]: I1123 11:14:46.780707     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:14:46 old-k8s-version-378086 kubelet[787]: E1123 11:14:46.781434     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:14:50 old-k8s-version-378086 kubelet[787]: I1123 11:14:50.405667     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:14:50 old-k8s-version-378086 kubelet[787]: E1123 11:14:50.406226     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:14:58 old-k8s-version-378086 kubelet[787]: I1123 11:14:58.813878     787 scope.go:117] "RemoveContainer" containerID="6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada"
	Nov 23 11:14:58 old-k8s-version-378086 kubelet[787]: I1123 11:14:58.845604     787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p96px" podStartSLOduration=9.91895456 podCreationTimestamp="2025-11-23 11:14:40 +0000 UTC" firstStartedPulling="2025-11-23 11:14:40.447878415 +0000 UTC m=+19.014881780" lastFinishedPulling="2025-11-23 11:14:49.37239109 +0000 UTC m=+27.939394464" observedRunningTime="2025-11-23 11:14:49.812825542 +0000 UTC m=+28.379828916" watchObservedRunningTime="2025-11-23 11:14:58.843467244 +0000 UTC m=+37.410470610"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: I1123 11:15:05.653901     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: I1123 11:15:05.834672     787 scope.go:117] "RemoveContainer" containerID="29ebbcd7ba763a1fbfede38143c9e1185b19a7aa62dfe12b9572d585d57d1ead"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: I1123 11:15:05.835010     787 scope.go:117] "RemoveContainer" containerID="070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	Nov 23 11:15:05 old-k8s-version-378086 kubelet[787]: E1123 11:15:05.835363     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:15:10 old-k8s-version-378086 kubelet[787]: I1123 11:15:10.404717     787 scope.go:117] "RemoveContainer" containerID="070e088d6ab1bb07083d1f9f5e8b610be8731b3d87ba3b5214909087ac96b9a1"
	Nov 23 11:15:10 old-k8s-version-378086 kubelet[787]: E1123 11:15:10.405097     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4pwv9_kubernetes-dashboard(cd03be4f-d3cf-411d-81a7-5042463abcd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4pwv9" podUID="cd03be4f-d3cf-411d-81a7-5042463abcd6"
	Nov 23 11:15:21 old-k8s-version-378086 kubelet[787]: I1123 11:15:21.565914     787 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 11:15:21 old-k8s-version-378086 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:15:21 old-k8s-version-378086 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:15:21 old-k8s-version-378086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4c5050d05088c8d4aa155ed1ef8c68b82a7e47e3df5aea08651a337b5ecd164f] <==
	2025/11/23 11:14:49 Using namespace: kubernetes-dashboard
	2025/11/23 11:14:49 Using in-cluster config to connect to apiserver
	2025/11/23 11:14:49 Using secret token for csrf signing
	2025/11/23 11:14:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:14:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:14:49 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 11:14:49 Generating JWE encryption key
	2025/11/23 11:14:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:14:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:14:50 Initializing JWE encryption key from synchronized object
	2025/11/23 11:14:50 Creating in-cluster Sidecar client
	2025/11/23 11:14:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:14:50 Serving insecurely on HTTP port: 9090
	2025/11/23 11:15:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:14:49 Starting overwatch
	
	
	==> storage-provisioner [6f712ec8b4c0cc3af7f67620e9d706b4caf4cf53a50a4a00d7d3f0d544d7fada] <==
	I1123 11:14:28.654093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:14:58.656746       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [72eefe4998ad926e91ba0b4aeaa70f2824e1d1d4509369827c4a7c5dda6c05e4] <==
	I1123 11:14:58.875527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:14:58.889249       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:14:58.889300       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 11:15:16.285773       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:15:16.285943       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-378086_d365c105-0a59-4ab4-82c0-06aff9e1c616!
	I1123 11:15:16.286860       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96c35a90-0779-45d9-8ae6-4ff1ea7116b2", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-378086_d365c105-0a59-4ab4-82c0-06aff9e1c616 became leader
	I1123 11:15:16.386522       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-378086_d365c105-0a59-4ab4-82c0-06aff9e1c616!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-378086 -n old-k8s-version-378086
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-378086 -n old-k8s-version-378086: exit status 2 (398.607213ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-378086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.38s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (303.543147ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:17:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-258179 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-258179 describe deploy/metrics-server -n kube-system: exit status 1 (87.040693ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-258179 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-258179
helpers_test.go:243: (dbg) docker inspect no-preload-258179:

-- stdout --
	[
	    {
	        "Id": "e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec",
	        "Created": "2025-11-23T11:15:32.709473146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 721438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:15:32.794039969Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/hosts",
	        "LogPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec-json.log",
	        "Name": "/no-preload-258179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-258179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-258179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec",
	                "LowerDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-258179",
	                "Source": "/var/lib/docker/volumes/no-preload-258179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-258179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-258179",
	                "name.minikube.sigs.k8s.io": "no-preload-258179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a80b27a9bbd65054fd3b6344349582a0f470998b10408096314b35120d990cf7",
	            "SandboxKey": "/var/run/docker/netns/a80b27a9bbd6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-258179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:cb:85:3e:d2:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "21820889903cdda52be85d36791838a2563a18a74e774bdfd134f439e013fcbd",
	                    "EndpointID": "b4949b73998aab519d9e221ca9eacbbd12df339c8ccfcae5c9233bfebbfd1f43",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-258179",
	                        "e9516afbc973"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
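
For reference, the inspect output above shows the kic container publishing 22/tcp, 2376/tcp, 5000/tcp, 8443/tcp and 32443/tcp on ephemeral 127.0.0.1 host ports (33802-33806 here). A minimal Go sketch of reading such a mapping back with a docker CLI format template follows; it uses the same template that appears in the cli_runner calls later in this log, but the container name, wrapper function and error handling are illustrative, not minikube's actual helper.

    // portlookup.go - sketch: read the host port Docker assigned to a
    // container's published 22/tcp endpoint, assuming the docker CLI is on PATH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPortFor runs `docker container inspect` with a Go template that
    // indexes NetworkSettings.Ports, as seen in the log below.
    func hostPortFor(container, portProto string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, portProto)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostPortFor("no-preload-258179", "22/tcp")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("ssh is published on 127.0.0.1:" + port) // 33802 in the dump above
    }
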
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-258179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-258179 logs -n 25: (1.331575795s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-344709 sudo crio config                                                                                                                                                                                                             │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p cilium-344709                                                                                                                                                                                                                              │ cilium-344709            │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ pause   │ -p pause-851396 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │                     │
	│ delete  │ -p pause-851396                                                                                                                                                                                                                               │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p force-systemd-env-613417                                                                                                                                                                                                                   │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ cert-options-700578 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	│ stop    │ -p old-k8s-version-378086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:15:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
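
As a side note, the "Log line format" header above describes the klog layout used by every I/W/E/F line that follows. A small Go sketch of a regular expression that splits such a line into its fields; the field labels are my own, not anything minikube defines.

    // klogparse.go - sketch: split a klog-style line such as
    // "I1123 11:15:51.054936  724363 out.go:360] Setting OutFile to fd 1 ..."
    // into severity, date, time, thread id, file:line and message.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
    	line := "I1123 11:15:51.054936  724363 out.go:360] Setting OutFile to fd 1 ..."
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a klog line")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }
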
	I1123 11:15:51.054936  724363 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:15:51.055058  724363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:15:51.055063  724363 out.go:374] Setting ErrFile to fd 2...
	I1123 11:15:51.055068  724363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:15:51.055424  724363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:15:51.055893  724363 out.go:368] Setting JSON to false
	I1123 11:15:51.056851  724363 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14300,"bootTime":1763882251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:15:51.056934  724363 start.go:143] virtualization:  
	I1123 11:15:51.064664  724363 out.go:179] * [embed-certs-715679] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:15:51.068251  724363 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:15:51.068416  724363 notify.go:221] Checking for updates...
	I1123 11:15:51.076226  724363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:15:51.079685  724363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:15:51.082981  724363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:15:51.087236  724363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:15:51.090441  724363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:15:51.094134  724363 config.go:182] Loaded profile config "no-preload-258179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:15:51.094228  724363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:15:51.146012  724363 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:15:51.146157  724363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:15:51.251615  724363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 11:15:51.241882868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:15:51.251720  724363 docker.go:319] overlay module found
	I1123 11:15:51.255456  724363 out.go:179] * Using the docker driver based on user configuration
	I1123 11:15:51.258731  724363 start.go:309] selected driver: docker
	I1123 11:15:51.258753  724363 start.go:927] validating driver "docker" against <nil>
	I1123 11:15:51.258768  724363 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:15:51.259503  724363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:15:51.347054  724363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 11:15:51.335327026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:15:51.347201  724363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 11:15:51.347424  724363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:15:51.350930  724363 out.go:179] * Using Docker driver with root privileges
	I1123 11:15:51.355690  724363 cni.go:84] Creating CNI manager for ""
	I1123 11:15:51.355771  724363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:15:51.355786  724363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:15:51.355869  724363 start.go:353] cluster config:
	{Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:15:51.359042  724363 out.go:179] * Starting "embed-certs-715679" primary control-plane node in "embed-certs-715679" cluster
	I1123 11:15:51.361907  724363 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:15:51.364987  724363 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:15:51.367977  724363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:15:51.368026  724363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:15:51.368035  724363 cache.go:65] Caching tarball of preloaded images
	I1123 11:15:51.368124  724363 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:15:51.368134  724363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:15:51.368245  724363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json ...
	I1123 11:15:51.368262  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json: {Name:mkcdf115b441d65e30b30ee132ac6249693056ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:15:51.368411  724363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:15:51.389881  724363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:15:51.389905  724363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:15:51.389920  724363 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:15:51.389951  724363 start.go:360] acquireMachinesLock for embed-certs-715679: {Name:mkb7d2190da17f9715c804089887bdf6adc5f2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:15:51.390054  724363 start.go:364] duration metric: took 82.725µs to acquireMachinesLock for "embed-certs-715679"
	I1123 11:15:51.390085  724363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:15:51.390155  724363 start.go:125] createHost starting for "" (driver="docker")
	I1123 11:15:48.286396  721133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.819161128s)
	I1123 11:15:48.286419  721133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 11:15:48.286436  721133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 11:15:48.286482  721133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 11:15:49.839053  721133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.552550249s)
	I1123 11:15:49.839081  721133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 11:15:49.839119  721133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 11:15:49.839169  721133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 11:15:51.393640  724363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:15:51.393881  724363 start.go:159] libmachine.API.Create for "embed-certs-715679" (driver="docker")
	I1123 11:15:51.393921  724363 client.go:173] LocalClient.Create starting
	I1123 11:15:51.394012  724363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:15:51.394045  724363 main.go:143] libmachine: Decoding PEM data...
	I1123 11:15:51.394061  724363 main.go:143] libmachine: Parsing certificate...
	I1123 11:15:51.394118  724363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:15:51.394148  724363 main.go:143] libmachine: Decoding PEM data...
	I1123 11:15:51.394164  724363 main.go:143] libmachine: Parsing certificate...
	I1123 11:15:51.394536  724363 cli_runner.go:164] Run: docker network inspect embed-certs-715679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:15:51.409023  724363 cli_runner.go:211] docker network inspect embed-certs-715679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:15:51.409102  724363 network_create.go:284] running [docker network inspect embed-certs-715679] to gather additional debugging logs...
	I1123 11:15:51.409122  724363 cli_runner.go:164] Run: docker network inspect embed-certs-715679
	W1123 11:15:51.427058  724363 cli_runner.go:211] docker network inspect embed-certs-715679 returned with exit code 1
	I1123 11:15:51.427102  724363 network_create.go:287] error running [docker network inspect embed-certs-715679]: docker network inspect embed-certs-715679: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-715679 not found
	I1123 11:15:51.427116  724363 network_create.go:289] output of [docker network inspect embed-certs-715679]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-715679 not found
	
	** /stderr **
	I1123 11:15:51.427214  724363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:15:51.444162  724363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:15:51.444520  724363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:15:51.444893  724363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:15:51.445322  724363 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a002e0}
	I1123 11:15:51.445347  724363 network_create.go:124] attempt to create docker network embed-certs-715679 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 11:15:51.445476  724363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-715679 embed-certs-715679
	I1123 11:15:51.513224  724363 network_create.go:108] docker network embed-certs-715679 192.168.76.0/24 created
	I1123 11:15:51.513260  724363 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-715679" container
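
The network.go lines above show the subnet scan walking private /24 candidates (192.168.49.0, 58.0, 67.0 are already bridged) and settling on 192.168.76.0/24. A rough Go sketch of that kind of scan, assuming a caller-supplied set of taken subnets; the step size of 9 and the candidate range only mirror the values visible in this log and are not necessarily minikube's real allocator.

    // subnetpick.go - sketch: choose the first 192.168.x.0/24 not already in use,
    // stepping through the same candidates seen in the log (49, 58, 67, 76, ...).
    package main

    import "fmt"

    // firstFreeSubnet walks candidate /24s starting at 192.168.49.0 in steps of 9
    // and returns the first one not present in taken.
    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{ // bridges already present on the CI host
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, as chosen above
    }
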
	I1123 11:15:51.513331  724363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:15:51.535031  724363 cli_runner.go:164] Run: docker volume create embed-certs-715679 --label name.minikube.sigs.k8s.io=embed-certs-715679 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:15:51.558418  724363 oci.go:103] Successfully created a docker volume embed-certs-715679
	I1123 11:15:51.558497  724363 cli_runner.go:164] Run: docker run --rm --name embed-certs-715679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-715679 --entrypoint /usr/bin/test -v embed-certs-715679:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:15:52.392907  724363 oci.go:107] Successfully prepared a docker volume embed-certs-715679
	I1123 11:15:52.392967  724363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:15:52.392982  724363 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:15:52.393053  724363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-715679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 11:15:51.805351  721133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.966154622s)
	I1123 11:15:51.805388  721133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 11:15:51.805428  721133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 11:15:51.805477  721133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1123 11:15:54.299480  721133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.493980882s)
	I1123 11:15:54.299517  721133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 11:15:54.299542  721133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 11:15:54.299618  721133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1123 11:15:55.145160  721133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 11:15:55.145198  721133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 11:15:55.145246  721133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1123 11:15:57.687713  724363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-715679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.294624439s)
	I1123 11:15:57.687750  724363 kic.go:203] duration metric: took 5.294764224s to extract preloaded images to volume ...
	W1123 11:15:57.687884  724363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:15:57.688020  724363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:15:57.781441  724363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-715679 --name embed-certs-715679 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-715679 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-715679 --network embed-certs-715679 --ip 192.168.76.2 --volume embed-certs-715679:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:15:58.233654  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Running}}
	I1123 11:15:58.273182  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:15:58.303206  724363 cli_runner.go:164] Run: docker exec embed-certs-715679 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:15:58.381205  724363 oci.go:144] the created container "embed-certs-715679" has a running status.
	I1123 11:15:58.381241  724363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa...
	I1123 11:15:58.631107  724363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:15:58.658892  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:15:58.688300  724363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:15:58.688321  724363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-715679 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:15:58.755118  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:15:58.780410  724363 machine.go:94] provisionDockerMachine start ...
	I1123 11:15:58.780506  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:15:58.807124  724363 main.go:143] libmachine: Using SSH client type: native
	I1123 11:15:58.807467  724363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1123 11:15:58.807482  724363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:15:58.808211  724363 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:15:59.723083  721133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.577811669s)
	I1123 11:15:59.723110  721133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 11:15:59.723130  721133 cache_images.go:125] Successfully loaded all cached images
	I1123 11:15:59.723135  721133 cache_images.go:94] duration metric: took 18.691073272s to LoadCachedImages
	I1123 11:15:59.723143  721133 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 11:15:59.723228  721133 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-258179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:15:59.723318  721133 ssh_runner.go:195] Run: crio config
	I1123 11:15:59.790200  721133 cni.go:84] Creating CNI manager for ""
	I1123 11:15:59.790226  721133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:15:59.790274  721133 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:15:59.790310  721133 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-258179 NodeName:no-preload-258179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:15:59.790456  721133 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-258179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
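
The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---). A small Go sketch of pulling the eviction thresholds out of the KubeletConfiguration document with gopkg.in/yaml.v3; the struct is trimmed to what the example needs and is not the full upstream type, and minikube itself does not necessarily parse the file this way.

    // kubeadmcfg.go - sketch: decode a multi-document kubeadm config and report
    // the KubeletConfiguration eviction settings (all "0%" above, i.e. disk-based
    // eviction disabled).
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // doc carries only the fields this sketch cares about; unknown fields are ignored.
    type doc struct {
    	Kind         string            `yaml:"kind"`
    	EvictionHard map[string]string `yaml:"evictionHard"`
    	FailSwapOn   bool              `yaml:"failSwapOn"`
    }

    func main() {
    	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var d doc
    		if err := dec.Decode(&d); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		if d.Kind == "KubeletConfiguration" {
    			fmt.Println("evictionHard:", d.EvictionHard, "failSwapOn:", d.FailSwapOn)
    		}
    	}
    }
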
	
	I1123 11:15:59.790536  721133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:15:59.798347  721133 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 11:15:59.798413  721133 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 11:15:59.806129  721133 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 11:15:59.806221  721133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 11:15:59.806995  721133 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 11:15:59.806997  721133 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 11:15:59.810498  721133 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 11:15:59.810531  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 11:16:00.965031  721133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:16:00.997527  721133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 11:16:01.013291  721133 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 11:16:01.013330  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 11:16:01.133230  721133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 11:16:01.147719  721133 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 11:16:01.147767  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
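
The binaries step above fetches kubectl, kubelet and kubeadm from dl.k8s.io, each with a checksum=file:...sha256 query so the download can be verified against the published digest. A short Go sketch of the same idea done by hand: fetch a file and its .sha256, compare digests. The URL is the one visible in the log; error handling is minimal and this is not minikube's downloader.

    // checkeddl.go - sketch: download a Kubernetes release binary and verify it
    // against the published .sha256 file.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"

    	body, err := fetch(url)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(url + ".sha256")
    	if err != nil {
    		panic(err)
    	}

    	got := sha256.Sum256(body)
    	want := strings.TrimSpace(string(sum)) // the .sha256 file holds the bare hex digest
    	if hex.EncodeToString(got[:]) != want {
    		panic("checksum mismatch for " + url)
    	}
    	fmt.Printf("verified %s (%d bytes)\n", url, len(body))
    }
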
	I1123 11:16:01.762037  721133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:16:01.769447  721133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 11:16:01.783061  721133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:16:01.798348  721133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 11:16:01.812710  721133 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:16:01.816680  721133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:16:01.827402  721133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:16:01.968871  721133 ssh_runner.go:195] Run: sudo systemctl start kubelet
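
The /etc/hosts update a few lines above is done with a bash one-liner that drops any stale control-plane.minikube.internal entry and appends the current mapping. A Go sketch of the equivalent rewrite, assuming write access to the hosts file; the path, IP and host name are the ones in the log, everything else is illustrative.

    // hostsentry.go - sketch: remove a stale hosts entry for a name and append
    // the current IP mapping, mirroring the grep -v / append / cp one-liner above.
    package main

    import (
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop a stale mapping for the same host name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
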
	I1123 11:16:01.999775  721133 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179 for IP: 192.168.85.2
	I1123 11:16:01.999799  721133 certs.go:195] generating shared ca certs ...
	I1123 11:16:01.999818  721133 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:01.999986  721133 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:16:02.000046  721133 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:16:02.000054  721133 certs.go:257] generating profile certs ...
	I1123 11:16:02.000124  721133 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.key
	I1123 11:16:02.000137  721133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt with IP's: []
	I1123 11:16:02.139561  721133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt ...
	I1123 11:16:02.139639  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt: {Name:mk6606ec67859325004dc47b28bd8293e3519795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:02.140593  721133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.key ...
	I1123 11:16:02.140612  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.key: {Name:mk49e1a59205b7947bdc40c5f6c8acb9ee113421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:02.140788  721133 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key.016482d5
	I1123 11:16:02.140804  721133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt.016482d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 11:16:02.489902  721133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt.016482d5 ...
	I1123 11:16:02.489941  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt.016482d5: {Name:mk42fefe15e3200ca5adccabe3fa1ef6d0445037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:02.490153  721133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key.016482d5 ...
	I1123 11:16:02.490166  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key.016482d5: {Name:mk6dc264a54e41b81560da8c57a3d957a5d03da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:02.490253  721133 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt.016482d5 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt
	I1123 11:16:02.490331  721133 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key.016482d5 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key
	I1123 11:16:02.490400  721133 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.key
	I1123 11:16:02.490414  721133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.crt with IP's: []
	I1123 11:16:02.621609  721133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.crt ...
	I1123 11:16:02.621643  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.crt: {Name:mkab6c61e9a5c2d6b1bbd2673350d6489cd9f123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:02.621866  721133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.key ...
	I1123 11:16:02.621881  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.key: {Name:mk4c95a32fb72ccdb6a5bb119dfa1ff4c95af20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
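	certs.go generates the profile certificates in-process, but the result can be approximated with openssl: a fresh key plus a CSR signed by the shared minikubeCA pair (ca.crt/ca.key in the log), with the apiserver SANs the log lists (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A hedged sketch rather than minikube's actual code path; the CN and the 2048-bit key size are placeholders:
	    # approximate the apiserver profile cert with openssl (sketch)
	    SANS="IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2"
	    openssl genrsa -out apiserver.key 2048
	    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	      -days 365 -extfile <(printf "subjectAltName=%s" "$SANS") -out apiserver.crt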
	I1123 11:16:02.622098  721133 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:16:02.622148  721133 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:16:02.622161  721133 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:16:02.622189  721133 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:16:02.622217  721133 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:16:02.622255  721133 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:16:02.622305  721133 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:16:02.622891  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:16:02.647670  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:16:02.674624  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:16:02.698086  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:16:02.718166  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:16:02.736915  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:16:02.755978  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:16:02.775015  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:16:02.794699  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:16:02.814564  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:16:02.854768  721133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:16:02.894885  721133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:16:02.908290  721133 ssh_runner.go:195] Run: openssl version
	I1123 11:16:02.918506  721133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:16:02.927279  721133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:16:02.931354  721133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:16:02.931420  721133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:16:02.975363  721133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:16:02.983893  721133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:16:02.992285  721133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:16:02.996273  721133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:16:02.996363  721133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:16:03.043545  721133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:16:03.056722  721133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:16:03.067953  721133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:16:03.072082  721133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:16:03.072150  721133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:16:03.125025  721133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
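	The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: c_rehash-style symlinks that let TLS clients locate a CA in /etc/ssl/certs by hash. The same link can be created by hand for any certificate:
	    # create the hash-named symlink OpenSSL looks up in /etc/ssl/certs (sketch)
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"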
	I1123 11:16:03.134123  721133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:16:03.138854  721133 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:16:03.138949  721133 kubeadm.go:401] StartCluster: {Name:no-preload-258179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:16:03.139055  721133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:16:03.139156  721133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:16:03.169373  721133 cri.go:89] found id: ""
	I1123 11:16:03.169470  721133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:16:03.180170  721133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:16:03.188832  721133 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:16:03.188895  721133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:16:03.198916  721133 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:16:03.198937  721133 kubeadm.go:158] found existing configuration files:
	
	I1123 11:16:03.198995  721133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 11:16:03.207958  721133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:16:03.208030  721133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:16:03.216022  721133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 11:16:03.226149  721133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:16:03.226222  721133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:16:03.236703  721133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 11:16:03.245167  721133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:16:03.245233  721133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:16:03.255861  721133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 11:16:03.263976  721133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:16:03.264045  721133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 11:16:03.272526  721133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:16:03.317230  721133 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:16:03.317545  721133 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:16:03.354499  721133 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:16:03.354576  721133 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:16:03.354616  721133 kubeadm.go:319] OS: Linux
	I1123 11:16:03.354668  721133 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:16:03.354722  721133 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:16:03.354773  721133 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:16:03.354825  721133 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:16:03.354882  721133 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:16:03.354942  721133 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:16:03.354992  721133 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:16:03.355042  721133 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:16:03.355092  721133 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:16:03.454020  721133 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:16:03.454131  721133 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:16:03.454226  721133 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 11:16:03.473584  721133 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
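	The CGROUPS_* lines above come from kubeadm's system verification; on this 5.15 kernel every controller it probes is reported enabled. Controller availability can also be checked directly on the node; which of the two files exists depends on whether the host runs cgroup v1 or v2:
	    # list available cgroup controllers (v1 and v2 variants)
	    cat /proc/cgroups                      # cgroup v1: one row per controller
	    cat /sys/fs/cgroup/cgroup.controllers  # cgroup v2: space-separated list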
	I1123 11:16:01.977772  724363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-715679
	
	I1123 11:16:01.977795  724363 ubuntu.go:182] provisioning hostname "embed-certs-715679"
	I1123 11:16:01.977862  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:02.019300  724363 main.go:143] libmachine: Using SSH client type: native
	I1123 11:16:02.019632  724363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1123 11:16:02.019651  724363 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-715679 && echo "embed-certs-715679" | sudo tee /etc/hostname
	I1123 11:16:02.195819  724363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-715679
	
	I1123 11:16:02.195915  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:02.216554  724363 main.go:143] libmachine: Using SSH client type: native
	I1123 11:16:02.216892  724363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1123 11:16:02.216913  724363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-715679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-715679/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-715679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:16:02.377854  724363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:16:02.377885  724363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:16:02.377918  724363 ubuntu.go:190] setting up certificates
	I1123 11:16:02.377927  724363 provision.go:84] configureAuth start
	I1123 11:16:02.377986  724363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:16:02.400104  724363 provision.go:143] copyHostCerts
	I1123 11:16:02.400172  724363 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:16:02.400182  724363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:16:02.400259  724363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:16:02.400359  724363 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:16:02.400364  724363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:16:02.400390  724363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:16:02.400448  724363 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:16:02.400453  724363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:16:02.400476  724363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:16:02.400530  724363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.embed-certs-715679 san=[127.0.0.1 192.168.76.2 embed-certs-715679 localhost minikube]
	I1123 11:16:02.828521  724363 provision.go:177] copyRemoteCerts
	I1123 11:16:02.828629  724363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:16:02.828713  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:02.855661  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:02.966319  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:16:02.990606  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:16:03.015833  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:16:03.036380  724363 provision.go:87] duration metric: took 658.43127ms to configureAuth
	I1123 11:16:03.036409  724363 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:16:03.036590  724363 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:16:03.036723  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:03.058689  724363 main.go:143] libmachine: Using SSH client type: native
	I1123 11:16:03.058991  724363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1123 11:16:03.059005  724363 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:16:03.411466  724363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:16:03.411556  724363 machine.go:97] duration metric: took 4.631127342s to provisionDockerMachine
	I1123 11:16:03.411581  724363 client.go:176] duration metric: took 12.017651747s to LocalClient.Create
	I1123 11:16:03.411649  724363 start.go:167] duration metric: took 12.017756971s to libmachine.API.Create "embed-certs-715679"
	I1123 11:16:03.411677  724363 start.go:293] postStartSetup for "embed-certs-715679" (driver="docker")
	I1123 11:16:03.411710  724363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:16:03.411800  724363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:16:03.411875  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:03.437533  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:03.554242  724363 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:16:03.558078  724363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:16:03.558106  724363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:16:03.558118  724363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:16:03.558174  724363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:16:03.558257  724363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:16:03.558359  724363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:16:03.566196  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:16:03.585667  724363 start.go:296] duration metric: took 173.953744ms for postStartSetup
	I1123 11:16:03.586038  724363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:16:03.607293  724363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json ...
	I1123 11:16:03.607578  724363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:16:03.607627  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:03.633595  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:03.752794  724363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:16:03.759697  724363 start.go:128] duration metric: took 12.369518627s to createHost
	I1123 11:16:03.759729  724363 start.go:83] releasing machines lock for "embed-certs-715679", held for 12.369662153s
	I1123 11:16:03.759820  724363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:16:03.785106  724363 ssh_runner.go:195] Run: cat /version.json
	I1123 11:16:03.785189  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:03.785499  724363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:16:03.785570  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:03.819036  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:03.844091  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:03.933271  724363 ssh_runner.go:195] Run: systemctl --version
	I1123 11:16:04.034355  724363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:16:04.078161  724363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:16:04.083294  724363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:16:04.083436  724363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:16:04.117182  724363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
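	The find/mv above parks any bridge or podman CNI configs under a .mk_disabled suffix so that only the kindnet configuration installed later is active. Undoing that (for example when debugging a node by hand) is just the reverse rename; the suffix is the one the log uses:
	    # re-enable CNI configs that minikube parked with a .mk_disabled suffix (sketch)
	    for f in /etc/cni/net.d/*.mk_disabled; do
	      [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
	    done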
	I1123 11:16:04.117257  724363 start.go:496] detecting cgroup driver to use...
	I1123 11:16:04.117303  724363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:16:04.117376  724363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:16:04.143948  724363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:16:04.159224  724363 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:16:04.159340  724363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:16:04.178236  724363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:16:04.197361  724363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:16:04.360591  724363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:16:04.580999  724363 docker.go:234] disabling docker service ...
	I1123 11:16:04.581174  724363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:16:04.619028  724363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:16:04.633546  724363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:16:04.776711  724363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:16:04.932202  724363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:16:04.947015  724363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:16:04.961395  724363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:16:04.961544  724363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:04.971209  724363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:16:04.971320  724363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:04.980394  724363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:04.989651  724363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:04.999097  724363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:16:05.008836  724363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:05.019547  724363 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:05.033694  724363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:16:05.043449  724363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:16:05.051503  724363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:16:05.062219  724363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:16:05.243127  724363 ssh_runner.go:195] Run: sudo systemctl restart crio
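	The sed edits between 11:16:04.96 and 11:16:05.04 rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A quick way to confirm they landed; the expected values are inferred from the sed expressions above, not read back from the file:
	    # spot-check the CRI-O drop-in after minikube's sed edits (sketch)
	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",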
	I1123 11:16:05.574250  724363 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:16:05.574386  724363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:16:05.578931  724363 start.go:564] Will wait 60s for crictl version
	I1123 11:16:05.579060  724363 ssh_runner.go:195] Run: which crictl
	I1123 11:16:05.583155  724363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:16:05.631607  724363 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:16:05.631705  724363 ssh_runner.go:195] Run: crio --version
	I1123 11:16:05.691124  724363 ssh_runner.go:195] Run: crio --version
	I1123 11:16:05.729555  724363 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:16:05.732584  724363 cli_runner.go:164] Run: docker network inspect embed-certs-715679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:16:05.755165  724363 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:16:05.759961  724363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:16:05.771273  724363 kubeadm.go:884] updating cluster {Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:16:05.771407  724363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:16:05.771467  724363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:16:05.835948  724363 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:16:05.835972  724363 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:16:05.836039  724363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:16:05.893816  724363 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:16:05.893838  724363 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:16:05.893846  724363 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:16:05.893949  724363 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-715679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
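	The unit text above ends up in /lib/systemd/system/kubelet.service, with the 10-kubeadm.conf drop-in scp'd a few lines later. On a live node the merged view is easiest to read through systemd itself:
	    # show the kubelet unit together with minikube's drop-in (sketch)
	    systemctl cat kubelet
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf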
	I1123 11:16:05.894026  724363 ssh_runner.go:195] Run: crio config
	I1123 11:16:05.985300  724363 cni.go:84] Creating CNI manager for ""
	I1123 11:16:05.985321  724363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:16:05.985339  724363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:16:05.985362  724363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-715679 NodeName:embed-certs-715679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:16:05.985524  724363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-715679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:16:05.985599  724363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:16:05.993720  724363 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:16:05.993789  724363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:16:06.002513  724363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 11:16:06.021539  724363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:16:06.038140  724363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
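	The kubeadm config rendered above is what was just written to /var/tmp/minikube/kubeadm.yaml.new; it is copied to kubeadm.yaml before kubeadm init consumes it. It can be sanity-checked on the node without starting anything; the "kubeadm config validate" subcommand used here is an assumption of this note (it exists in recent kubeadm releases) rather than something the log runs:
	    # dry-check the generated kubeadm config on the node (sketch)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new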
	I1123 11:16:06.053659  724363 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:16:03.479166  721133 out.go:252]   - Generating certificates and keys ...
	I1123 11:16:03.479265  721133 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:16:03.479339  721133 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:16:04.012835  721133 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:16:04.173761  721133 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:16:05.591868  721133 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:16:06.129794  721133 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:16:06.058522  724363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:16:06.072302  724363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:16:06.252471  724363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:16:06.277515  724363 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679 for IP: 192.168.76.2
	I1123 11:16:06.277596  724363 certs.go:195] generating shared ca certs ...
	I1123 11:16:06.277644  724363 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:06.277883  724363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:16:06.278006  724363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:16:06.278045  724363 certs.go:257] generating profile certs ...
	I1123 11:16:06.278199  724363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.key
	I1123 11:16:06.278234  724363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.crt with IP's: []
	I1123 11:16:06.438031  724363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.crt ...
	I1123 11:16:06.438145  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.crt: {Name:mk6d671b6868d0006a1f8dc8264dcf60373ecade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:06.438471  724363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.key ...
	I1123 11:16:06.438529  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.key: {Name:mka176463a995867cf8ab1c6f2690fb88dc94233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:06.438731  724363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key.2c6e1eca
	I1123 11:16:06.438788  724363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt.2c6e1eca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 11:16:06.693157  724363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt.2c6e1eca ...
	I1123 11:16:06.693233  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt.2c6e1eca: {Name:mk47564378e470fd00d29be108686576e52fa041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:06.693473  724363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key.2c6e1eca ...
	I1123 11:16:06.693515  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key.2c6e1eca: {Name:mk07a4c01c0a9a00cec34e77b54a6e7d7c8c9258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:06.693665  724363 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt.2c6e1eca -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt
	I1123 11:16:06.693802  724363 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key.2c6e1eca -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key
	I1123 11:16:06.693892  724363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key
	I1123 11:16:06.693944  724363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.crt with IP's: []
	I1123 11:16:07.099484  724363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.crt ...
	I1123 11:16:07.099520  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.crt: {Name:mkd694ea9e49d2dab34f1a324ab97eed64d5b477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:07.099692  724363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key ...
	I1123 11:16:07.099702  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key: {Name:mk2aa6a5e1aa4c05bec2a117f8ecdc84682b752c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:07.099879  724363 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:16:07.099922  724363 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:16:07.099932  724363 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:16:07.099959  724363 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:16:07.099984  724363 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:16:07.100010  724363 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:16:07.100054  724363 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:16:07.100746  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:16:07.121308  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:16:07.140516  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:16:07.160576  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:16:07.180194  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 11:16:07.199735  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:16:07.219728  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:16:07.239780  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:16:07.259643  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:16:07.280376  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:16:07.301048  724363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:16:07.320509  724363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:16:07.334733  724363 ssh_runner.go:195] Run: openssl version
	I1123 11:16:07.341243  724363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:16:07.356344  724363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:16:07.360203  724363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:16:07.360317  724363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:16:07.408331  724363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:16:07.417818  724363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:16:07.427370  724363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:16:07.432086  724363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:16:07.432159  724363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:16:07.474180  724363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:16:07.487168  724363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:16:07.497339  724363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:16:07.501937  724363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:16:07.502109  724363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:16:07.545709  724363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:16:07.555053  724363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:16:07.559738  724363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:16:07.559896  724363 kubeadm.go:401] StartCluster: {Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:16:07.560017  724363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:16:07.560103  724363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:16:07.591941  724363 cri.go:89] found id: ""
	I1123 11:16:07.592062  724363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:16:07.602671  724363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:16:07.611818  724363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:16:07.611934  724363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:16:07.622980  724363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:16:07.623053  724363 kubeadm.go:158] found existing configuration files:
	
	I1123 11:16:07.623133  724363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 11:16:07.632220  724363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:16:07.632342  724363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:16:07.640409  724363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 11:16:07.649432  724363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:16:07.649545  724363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:16:07.657468  724363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 11:16:07.666384  724363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:16:07.666498  724363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:16:07.674632  724363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 11:16:07.683293  724363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:16:07.683402  724363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 11:16:07.692317  724363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:16:07.742641  724363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:16:07.743283  724363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:16:07.778982  724363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:16:07.779161  724363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:16:07.779233  724363 kubeadm.go:319] OS: Linux
	I1123 11:16:07.779318  724363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:16:07.779406  724363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:16:07.779491  724363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:16:07.779574  724363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:16:07.779655  724363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:16:07.779735  724363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:16:07.779814  724363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:16:07.779901  724363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:16:07.779982  724363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:16:07.868297  724363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:16:07.868474  724363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:16:07.868643  724363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 11:16:07.893829  724363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:16:07.899931  724363 out.go:252]   - Generating certificates and keys ...
	I1123 11:16:07.900094  724363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:16:07.900207  724363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:16:09.108449  724363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:16:09.479874  724363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:16:09.834973  724363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:16:10.594904  724363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:16:06.888994  721133 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:16:06.889544  721133 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-258179] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:16:07.803804  721133 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:16:07.804353  721133 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-258179] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:16:08.221757  721133 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:16:08.616483  721133 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:16:08.988594  721133 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:16:08.989610  721133 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:16:09.511136  721133 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:16:09.686163  721133 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:16:10.590261  721133 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:16:11.681744  721133 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:16:11.993724  721133 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:16:11.993830  721133 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:16:11.993903  721133 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:16:11.864527  724363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:16:11.864733  724363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-715679 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:16:12.124032  724363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:16:12.124679  724363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-715679 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:16:12.465126  724363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:16:13.239683  724363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:16:13.773067  724363 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:16:13.773861  724363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:16:14.590351  724363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:16:14.820871  724363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:16:11.997547  721133 out.go:252]   - Booting up control plane ...
	I1123 11:16:11.997666  721133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:16:11.999445  721133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:16:12.005737  721133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:16:12.033784  721133 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:16:12.033894  721133 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:16:12.045756  721133 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:16:12.045854  721133 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:16:12.045897  721133 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:16:12.220550  721133 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:16:12.220681  721133 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:16:13.233621  721133 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.011469065s
	I1123 11:16:13.238095  721133 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:16:13.238188  721133 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 11:16:13.238278  721133 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:16:13.238356  721133 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
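The three control-plane-check URLs listed above can also be probed by hand from the node while kubeadm waits; a rough sketch (assumes the default unauthenticated access to these health paths, and -k because the endpoints serve self-signed certificates):

    curl -k https://192.168.85.2:8443/livez      # kube-apiserver (same URL kubeadm polls)
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler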
	I1123 11:16:17.191467  724363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:16:17.331155  724363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:16:17.787425  724363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:16:17.788541  724363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:16:17.791581  724363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:16:17.795015  724363 out.go:252]   - Booting up control plane ...
	I1123 11:16:17.795114  724363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:16:17.795193  724363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:16:17.796159  724363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:16:17.813821  724363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:16:17.813944  724363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:16:17.822525  724363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:16:17.822630  724363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:16:17.822674  724363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:16:18.021230  724363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:16:18.028042  724363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:16:19.033782  724363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001698873s
	I1123 11:16:19.033896  724363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:16:19.033982  724363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 11:16:19.034075  724363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:16:19.034158  724363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 11:16:20.949797  721133 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.703646041s
	I1123 11:16:23.771973  721133 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.532639002s
	I1123 11:16:24.736974  721133 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501306634s
	I1123 11:16:24.776133  721133 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 11:16:24.802860  721133 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 11:16:24.829290  721133 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 11:16:24.829522  721133 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-258179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 11:16:24.852795  721133 kubeadm.go:319] [bootstrap-token] Using token: ea8r82.xrwmp2ysofrd18ba
	I1123 11:16:24.855663  721133 out.go:252]   - Configuring RBAC rules ...
	I1123 11:16:24.855787  721133 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 11:16:24.896876  721133 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 11:16:24.918210  721133 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 11:16:24.926215  721133 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 11:16:24.935949  721133 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 11:16:24.945760  721133 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 11:16:25.145325  721133 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 11:16:25.630042  721133 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 11:16:26.144213  721133 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 11:16:26.145903  721133 kubeadm.go:319] 
	I1123 11:16:26.145986  721133 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 11:16:26.145992  721133 kubeadm.go:319] 
	I1123 11:16:26.146072  721133 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 11:16:26.146076  721133 kubeadm.go:319] 
	I1123 11:16:26.146100  721133 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 11:16:26.146563  721133 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 11:16:26.146628  721133 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 11:16:26.146633  721133 kubeadm.go:319] 
	I1123 11:16:26.146687  721133 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 11:16:26.146691  721133 kubeadm.go:319] 
	I1123 11:16:26.146738  721133 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 11:16:26.146742  721133 kubeadm.go:319] 
	I1123 11:16:26.146794  721133 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 11:16:26.146875  721133 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 11:16:26.146943  721133 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 11:16:26.146947  721133 kubeadm.go:319] 
	I1123 11:16:26.147258  721133 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 11:16:26.147340  721133 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 11:16:26.147345  721133 kubeadm.go:319] 
	I1123 11:16:26.147653  721133 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ea8r82.xrwmp2ysofrd18ba \
	I1123 11:16:26.147761  721133 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 11:16:26.147965  721133 kubeadm.go:319] 	--control-plane 
	I1123 11:16:26.147973  721133 kubeadm.go:319] 
	I1123 11:16:26.148285  721133 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 11:16:26.148294  721133 kubeadm.go:319] 
	I1123 11:16:26.148597  721133 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ea8r82.xrwmp2ysofrd18ba \
	I1123 11:16:26.148939  721133 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 11:16:26.156716  721133 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 11:16:26.156943  721133 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 11:16:26.157055  721133 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
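The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA's public key; both profiles in this run print the same hash because they reuse minikube's existing CA ("Using existing ca certificate authority" above). It can be recomputed on the node with the standard kubeadm recipe, sketched here under the assumption of an RSA CA key and the certificateDir logged earlier:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'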
	I1123 11:16:26.157072  721133 cni.go:84] Creating CNI manager for ""
	I1123 11:16:26.157079  721133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:16:26.160564  721133 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 11:16:26.163390  721133 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 11:16:26.168171  721133 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 11:16:26.168189  721133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 11:16:26.194844  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 11:16:26.307785  724363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.274235624s
	I1123 11:16:27.038003  724363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.004881576s
	I1123 11:16:29.035597  724363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002240619s
	I1123 11:16:29.073290  724363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 11:16:29.102036  724363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 11:16:29.123608  724363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 11:16:29.124095  724363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-715679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 11:16:29.139543  724363 kubeadm.go:319] [bootstrap-token] Using token: 32gghh.fioq4jhdyjo2pb1q
	I1123 11:16:26.629612  721133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 11:16:26.629757  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:26.629826  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-258179 minikube.k8s.io/updated_at=2025_11_23T11_16_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=no-preload-258179 minikube.k8s.io/primary=true
	I1123 11:16:26.953906  721133 ops.go:34] apiserver oom_adj: -16
	I1123 11:16:26.954010  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:27.454129  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:27.954637  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:28.454625  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:28.954139  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:29.454304  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:29.954536  721133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:30.188830  721133 kubeadm.go:1114] duration metric: took 3.559132339s to wait for elevateKubeSystemPrivileges
	I1123 11:16:30.188864  721133 kubeadm.go:403] duration metric: took 27.04992166s to StartCluster
	I1123 11:16:30.188890  721133 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:30.188972  721133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:16:30.189742  721133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:30.191204  721133 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:16:30.191297  721133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:16:30.191533  721133 config.go:182] Loaded profile config "no-preload-258179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:16:30.191565  721133 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:16:30.191645  721133 addons.go:70] Setting storage-provisioner=true in profile "no-preload-258179"
	I1123 11:16:30.191663  721133 addons.go:239] Setting addon storage-provisioner=true in "no-preload-258179"
	I1123 11:16:30.191685  721133 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:16:30.192243  721133 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:16:30.192682  721133 addons.go:70] Setting default-storageclass=true in profile "no-preload-258179"
	I1123 11:16:30.192709  721133 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-258179"
	I1123 11:16:30.193018  721133 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:16:30.198921  721133 out.go:179] * Verifying Kubernetes components...
	I1123 11:16:30.205879  721133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:16:30.231077  721133 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:16:29.142411  724363 out.go:252]   - Configuring RBAC rules ...
	I1123 11:16:29.142550  724363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 11:16:29.150524  724363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 11:16:29.169212  724363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 11:16:29.174780  724363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 11:16:29.181324  724363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 11:16:29.187612  724363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 11:16:29.443734  724363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 11:16:29.887923  724363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 11:16:30.443408  724363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 11:16:30.444854  724363 kubeadm.go:319] 
	I1123 11:16:30.444928  724363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 11:16:30.444933  724363 kubeadm.go:319] 
	I1123 11:16:30.445010  724363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 11:16:30.445014  724363 kubeadm.go:319] 
	I1123 11:16:30.445045  724363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 11:16:30.445554  724363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 11:16:30.445613  724363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 11:16:30.445618  724363 kubeadm.go:319] 
	I1123 11:16:30.445672  724363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 11:16:30.445676  724363 kubeadm.go:319] 
	I1123 11:16:30.445723  724363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 11:16:30.445727  724363 kubeadm.go:319] 
	I1123 11:16:30.445779  724363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 11:16:30.445854  724363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 11:16:30.445923  724363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 11:16:30.445926  724363 kubeadm.go:319] 
	I1123 11:16:30.446239  724363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 11:16:30.446323  724363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 11:16:30.446328  724363 kubeadm.go:319] 
	I1123 11:16:30.446616  724363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 32gghh.fioq4jhdyjo2pb1q \
	I1123 11:16:30.446725  724363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 11:16:30.446973  724363 kubeadm.go:319] 	--control-plane 
	I1123 11:16:30.446984  724363 kubeadm.go:319] 
	I1123 11:16:30.447261  724363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 11:16:30.447271  724363 kubeadm.go:319] 
	I1123 11:16:30.447564  724363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 32gghh.fioq4jhdyjo2pb1q \
	I1123 11:16:30.447890  724363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 11:16:30.452346  724363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 11:16:30.452578  724363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 11:16:30.452706  724363 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 11:16:30.452723  724363 cni.go:84] Creating CNI manager for ""
	I1123 11:16:30.452731  724363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:16:30.456111  724363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 11:16:30.459188  724363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 11:16:30.463881  724363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 11:16:30.463899  724363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 11:16:30.508871  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 11:16:30.234041  721133 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:16:30.234065  721133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:16:30.234131  721133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:16:30.246273  721133 addons.go:239] Setting addon default-storageclass=true in "no-preload-258179"
	I1123 11:16:30.246321  721133 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:16:30.246744  721133 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:16:30.273535  721133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:16:30.290470  721133 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:16:30.290493  721133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:16:30.290557  721133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:16:30.325885  721133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:16:30.743523  721133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:16:30.802388  721133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:16:30.879477  721133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:16:30.879639  721133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:16:32.119657  721133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376051228s)
	I1123 11:16:32.119758  721133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.317302668s)
	I1123 11:16:32.119824  721133 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.240096474s)
	I1123 11:16:32.120765  721133 node_ready.go:35] waiting up to 6m0s for node "no-preload-258179" to be "Ready" ...
	I1123 11:16:32.119840  721133 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.24018285s)
	I1123 11:16:32.121104  721133 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
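The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expressions (not copied from the live ConfigMap), the block inserted immediately before the "forward . /etc/resolv.conf" line of the Corefile is:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

and a "log" directive is inserted just before "errors", so CoreDNS answers host.minikube.internal locally and falls through to the upstream resolver for everything else.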
	I1123 11:16:32.248332  721133 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 11:16:31.123281  724363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 11:16:31.123426  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:31.123514  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-715679 minikube.k8s.io/updated_at=2025_11_23T11_16_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=embed-certs-715679 minikube.k8s.io/primary=true
	I1123 11:16:31.597496  724363 ops.go:34] apiserver oom_adj: -16
	I1123 11:16:31.597609  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:32.098042  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:32.598238  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:33.097803  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:33.597722  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:34.098533  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:34.597719  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:35.098286  724363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:16:35.414806  724363 kubeadm.go:1114] duration metric: took 4.291441302s to wait for elevateKubeSystemPrivileges
	I1123 11:16:35.414836  724363 kubeadm.go:403] duration metric: took 27.854944991s to StartCluster
	I1123 11:16:35.414853  724363 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:35.414915  724363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:16:35.416252  724363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:16:35.416475  724363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:16:35.416656  724363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:16:35.416919  724363 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:16:35.416952  724363 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:16:35.417011  724363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-715679"
	I1123 11:16:35.417028  724363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-715679"
	I1123 11:16:35.417048  724363 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:16:35.417585  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:16:35.418155  724363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-715679"
	I1123 11:16:35.418182  724363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-715679"
	I1123 11:16:35.418467  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:16:35.423194  724363 out.go:179] * Verifying Kubernetes components...
	I1123 11:16:35.431429  724363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:16:35.449032  724363 addons.go:239] Setting addon default-storageclass=true in "embed-certs-715679"
	I1123 11:16:35.449083  724363 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:16:35.452869  724363 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:16:35.479060  724363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:16:35.479346  724363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:16:35.479366  724363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:16:35.479430  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:35.482120  724363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:16:35.482141  724363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:16:35.482207  724363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:16:35.519036  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:35.528545  724363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:16:35.786946  724363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:16:35.822991  724363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:16:35.837487  724363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:16:35.865125  724363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:16:32.251429  721133 addons.go:530] duration metric: took 2.059851806s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 11:16:32.630467  721133 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-258179" context rescaled to 1 replicas
	W1123 11:16:34.129605  721133 node_ready.go:57] node "no-preload-258179" has "Ready":"False" status (will retry)
	I1123 11:16:36.384499  724363 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 11:16:36.730659  724363 node_ready.go:35] waiting up to 6m0s for node "embed-certs-715679" to be "Ready" ...
	I1123 11:16:36.751210  724363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 11:16:36.753929  724363 addons.go:530] duration metric: took 1.336967385s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 11:16:36.889388  724363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-715679" context rescaled to 1 replicas
	W1123 11:16:38.733444  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:40.734228  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:36.623878  721133 node_ready.go:57] node "no-preload-258179" has "Ready":"False" status (will retry)
	W1123 11:16:38.624924  721133 node_ready.go:57] node "no-preload-258179" has "Ready":"False" status (will retry)
	W1123 11:16:41.124264  721133 node_ready.go:57] node "no-preload-258179" has "Ready":"False" status (will retry)
	W1123 11:16:43.233711  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:45.236094  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:43.623560  721133 node_ready.go:57] node "no-preload-258179" has "Ready":"False" status (will retry)
	W1123 11:16:46.123668  721133 node_ready.go:57] node "no-preload-258179" has "Ready":"False" status (will retry)
	I1123 11:16:47.124528  721133 node_ready.go:49] node "no-preload-258179" is "Ready"
	I1123 11:16:47.124552  721133 node_ready.go:38] duration metric: took 15.003768904s for node "no-preload-258179" to be "Ready" ...
	I1123 11:16:47.124564  721133 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:16:47.124631  721133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:16:47.144843  721133 api_server.go:72] duration metric: took 16.953600683s to wait for apiserver process to appear ...
	I1123 11:16:47.144867  721133 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:16:47.144886  721133 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 11:16:47.161400  721133 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 11:16:47.163252  721133 api_server.go:141] control plane version: v1.34.1
	I1123 11:16:47.163274  721133 api_server.go:131] duration metric: took 18.401455ms to wait for apiserver health ...
	I1123 11:16:47.163283  721133 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:16:47.170936  721133 system_pods.go:59] 8 kube-system pods found
	I1123 11:16:47.171032  721133 system_pods.go:61] "coredns-66bc5c9577-6xhlc" [78882ceb-6384-470c-b326-06a53eb9d178] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:16:47.171055  721133 system_pods.go:61] "etcd-no-preload-258179" [2b5348e5-76ed-42f1-9a5b-ac7f7568408a] Running
	I1123 11:16:47.171086  721133 system_pods.go:61] "kindnet-zbrwj" [2a14a616-4705-45ee-9906-40c727e4de80] Running
	I1123 11:16:47.171116  721133 system_pods.go:61] "kube-apiserver-no-preload-258179" [0a835193-4beb-43a4-a975-739f01a654be] Running
	I1123 11:16:47.171134  721133 system_pods.go:61] "kube-controller-manager-no-preload-258179" [1a08be14-37ef-4d98-a086-8a3db3edd9d4] Running
	I1123 11:16:47.171156  721133 system_pods.go:61] "kube-proxy-twzmv" [f4b947ff-ebeb-4bdd-8e56-70af47c2527b] Running
	I1123 11:16:47.171176  721133 system_pods.go:61] "kube-scheduler-no-preload-258179" [d43cdf01-9f1a-4384-965d-0e6573d232e4] Running
	I1123 11:16:47.171212  721133 system_pods.go:61] "storage-provisioner" [e9e3c249-589b-4f1c-ac62-f4d0107e35d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:16:47.171235  721133 system_pods.go:74] duration metric: took 7.946153ms to wait for pod list to return data ...
	I1123 11:16:47.171258  721133 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:16:47.197696  721133 default_sa.go:45] found service account: "default"
	I1123 11:16:47.197770  721133 default_sa.go:55] duration metric: took 26.484983ms for default service account to be created ...
	I1123 11:16:47.197795  721133 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:16:47.274924  721133 system_pods.go:86] 8 kube-system pods found
	I1123 11:16:47.275009  721133 system_pods.go:89] "coredns-66bc5c9577-6xhlc" [78882ceb-6384-470c-b326-06a53eb9d178] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:16:47.275032  721133 system_pods.go:89] "etcd-no-preload-258179" [2b5348e5-76ed-42f1-9a5b-ac7f7568408a] Running
	I1123 11:16:47.275056  721133 system_pods.go:89] "kindnet-zbrwj" [2a14a616-4705-45ee-9906-40c727e4de80] Running
	I1123 11:16:47.275096  721133 system_pods.go:89] "kube-apiserver-no-preload-258179" [0a835193-4beb-43a4-a975-739f01a654be] Running
	I1123 11:16:47.275115  721133 system_pods.go:89] "kube-controller-manager-no-preload-258179" [1a08be14-37ef-4d98-a086-8a3db3edd9d4] Running
	I1123 11:16:47.275135  721133 system_pods.go:89] "kube-proxy-twzmv" [f4b947ff-ebeb-4bdd-8e56-70af47c2527b] Running
	I1123 11:16:47.275172  721133 system_pods.go:89] "kube-scheduler-no-preload-258179" [d43cdf01-9f1a-4384-965d-0e6573d232e4] Running
	I1123 11:16:47.275197  721133 system_pods.go:89] "storage-provisioner" [e9e3c249-589b-4f1c-ac62-f4d0107e35d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:16:47.275229  721133 retry.go:31] will retry after 239.224896ms: missing components: kube-dns
	I1123 11:16:47.517587  721133 system_pods.go:86] 8 kube-system pods found
	I1123 11:16:47.517627  721133 system_pods.go:89] "coredns-66bc5c9577-6xhlc" [78882ceb-6384-470c-b326-06a53eb9d178] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:16:47.517636  721133 system_pods.go:89] "etcd-no-preload-258179" [2b5348e5-76ed-42f1-9a5b-ac7f7568408a] Running
	I1123 11:16:47.517642  721133 system_pods.go:89] "kindnet-zbrwj" [2a14a616-4705-45ee-9906-40c727e4de80] Running
	I1123 11:16:47.517649  721133 system_pods.go:89] "kube-apiserver-no-preload-258179" [0a835193-4beb-43a4-a975-739f01a654be] Running
	I1123 11:16:47.517654  721133 system_pods.go:89] "kube-controller-manager-no-preload-258179" [1a08be14-37ef-4d98-a086-8a3db3edd9d4] Running
	I1123 11:16:47.517658  721133 system_pods.go:89] "kube-proxy-twzmv" [f4b947ff-ebeb-4bdd-8e56-70af47c2527b] Running
	I1123 11:16:47.517662  721133 system_pods.go:89] "kube-scheduler-no-preload-258179" [d43cdf01-9f1a-4384-965d-0e6573d232e4] Running
	I1123 11:16:47.517668  721133 system_pods.go:89] "storage-provisioner" [e9e3c249-589b-4f1c-ac62-f4d0107e35d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:16:47.517684  721133 retry.go:31] will retry after 336.68521ms: missing components: kube-dns
	I1123 11:16:47.858450  721133 system_pods.go:86] 8 kube-system pods found
	I1123 11:16:47.858490  721133 system_pods.go:89] "coredns-66bc5c9577-6xhlc" [78882ceb-6384-470c-b326-06a53eb9d178] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:16:47.858498  721133 system_pods.go:89] "etcd-no-preload-258179" [2b5348e5-76ed-42f1-9a5b-ac7f7568408a] Running
	I1123 11:16:47.858504  721133 system_pods.go:89] "kindnet-zbrwj" [2a14a616-4705-45ee-9906-40c727e4de80] Running
	I1123 11:16:47.858511  721133 system_pods.go:89] "kube-apiserver-no-preload-258179" [0a835193-4beb-43a4-a975-739f01a654be] Running
	I1123 11:16:47.858515  721133 system_pods.go:89] "kube-controller-manager-no-preload-258179" [1a08be14-37ef-4d98-a086-8a3db3edd9d4] Running
	I1123 11:16:47.858519  721133 system_pods.go:89] "kube-proxy-twzmv" [f4b947ff-ebeb-4bdd-8e56-70af47c2527b] Running
	I1123 11:16:47.858524  721133 system_pods.go:89] "kube-scheduler-no-preload-258179" [d43cdf01-9f1a-4384-965d-0e6573d232e4] Running
	I1123 11:16:47.858532  721133 system_pods.go:89] "storage-provisioner" [e9e3c249-589b-4f1c-ac62-f4d0107e35d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:16:47.858553  721133 retry.go:31] will retry after 435.12189ms: missing components: kube-dns
	I1123 11:16:48.297531  721133 system_pods.go:86] 8 kube-system pods found
	I1123 11:16:48.297564  721133 system_pods.go:89] "coredns-66bc5c9577-6xhlc" [78882ceb-6384-470c-b326-06a53eb9d178] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:16:48.297572  721133 system_pods.go:89] "etcd-no-preload-258179" [2b5348e5-76ed-42f1-9a5b-ac7f7568408a] Running
	I1123 11:16:48.297578  721133 system_pods.go:89] "kindnet-zbrwj" [2a14a616-4705-45ee-9906-40c727e4de80] Running
	I1123 11:16:48.297582  721133 system_pods.go:89] "kube-apiserver-no-preload-258179" [0a835193-4beb-43a4-a975-739f01a654be] Running
	I1123 11:16:48.297588  721133 system_pods.go:89] "kube-controller-manager-no-preload-258179" [1a08be14-37ef-4d98-a086-8a3db3edd9d4] Running
	I1123 11:16:48.297603  721133 system_pods.go:89] "kube-proxy-twzmv" [f4b947ff-ebeb-4bdd-8e56-70af47c2527b] Running
	I1123 11:16:48.297608  721133 system_pods.go:89] "kube-scheduler-no-preload-258179" [d43cdf01-9f1a-4384-965d-0e6573d232e4] Running
	I1123 11:16:48.297612  721133 system_pods.go:89] "storage-provisioner" [e9e3c249-589b-4f1c-ac62-f4d0107e35d7] Running
	I1123 11:16:48.297620  721133 system_pods.go:126] duration metric: took 1.099805842s to wait for k8s-apps to be running ...
	I1123 11:16:48.297627  721133 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:16:48.297688  721133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:16:48.311012  721133 system_svc.go:56] duration metric: took 13.373141ms WaitForService to wait for kubelet
	I1123 11:16:48.311058  721133 kubeadm.go:587] duration metric: took 18.119813558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:16:48.311079  721133 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:16:48.314041  721133 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:16:48.314073  721133 node_conditions.go:123] node cpu capacity is 2
	I1123 11:16:48.314087  721133 node_conditions.go:105] duration metric: took 3.002724ms to run NodePressure ...
	I1123 11:16:48.314099  721133 start.go:242] waiting for startup goroutines ...
	I1123 11:16:48.314106  721133 start.go:247] waiting for cluster config update ...
	I1123 11:16:48.314118  721133 start.go:256] writing updated cluster config ...
	I1123 11:16:48.314398  721133 ssh_runner.go:195] Run: rm -f paused
	I1123 11:16:48.318306  721133 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:16:48.322413  721133 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6xhlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.328441  721133 pod_ready.go:94] pod "coredns-66bc5c9577-6xhlc" is "Ready"
	I1123 11:16:49.328469  721133 pod_ready.go:86] duration metric: took 1.006025675s for pod "coredns-66bc5c9577-6xhlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.331187  721133 pod_ready.go:83] waiting for pod "etcd-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.335726  721133 pod_ready.go:94] pod "etcd-no-preload-258179" is "Ready"
	I1123 11:16:49.335752  721133 pod_ready.go:86] duration metric: took 4.542368ms for pod "etcd-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.338290  721133 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.343408  721133 pod_ready.go:94] pod "kube-apiserver-no-preload-258179" is "Ready"
	I1123 11:16:49.343440  721133 pod_ready.go:86] duration metric: took 5.120763ms for pod "kube-apiserver-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.345915  721133 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.526247  721133 pod_ready.go:94] pod "kube-controller-manager-no-preload-258179" is "Ready"
	I1123 11:16:49.526278  721133 pod_ready.go:86] duration metric: took 180.334414ms for pod "kube-controller-manager-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:49.726026  721133 pod_ready.go:83] waiting for pod "kube-proxy-twzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:50.125998  721133 pod_ready.go:94] pod "kube-proxy-twzmv" is "Ready"
	I1123 11:16:50.126025  721133 pod_ready.go:86] duration metric: took 399.972323ms for pod "kube-proxy-twzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:50.326162  721133 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:50.726467  721133 pod_ready.go:94] pod "kube-scheduler-no-preload-258179" is "Ready"
	I1123 11:16:50.726495  721133 pod_ready.go:86] duration metric: took 400.307833ms for pod "kube-scheduler-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:16:50.726506  721133 pod_ready.go:40] duration metric: took 2.408171719s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:16:50.789811  721133 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:16:50.793328  721133 out.go:179] * Done! kubectl is now configured to use "no-preload-258179" cluster and "default" namespace by default
	W1123 11:16:47.733685  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:50.233856  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:52.734290  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:54.734441  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:57.233535  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:16:59.733622  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 23 11:16:47 no-preload-258179 crio[837]: time="2025-11-23T11:16:47.179091417Z" level=info msg="Created container 744aab34984043baf757f6e2d23a4e2589a5fdf836337f8e02d9c7770db59353: kube-system/coredns-66bc5c9577-6xhlc/coredns" id=87457255-0a14-4da0-9560-db297030420d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:16:47 no-preload-258179 crio[837]: time="2025-11-23T11:16:47.180110891Z" level=info msg="Starting container: 744aab34984043baf757f6e2d23a4e2589a5fdf836337f8e02d9c7770db59353" id=3b2f058e-bcbd-4193-b1e9-d6c31a7f3dff name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:16:47 no-preload-258179 crio[837]: time="2025-11-23T11:16:47.182169737Z" level=info msg="Started container" PID=2486 containerID=744aab34984043baf757f6e2d23a4e2589a5fdf836337f8e02d9c7770db59353 description=kube-system/coredns-66bc5c9577-6xhlc/coredns id=3b2f058e-bcbd-4193-b1e9-d6c31a7f3dff name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a0ffeb9cd87d5efbcdeeb4fda7c2eca3e26102f893a410c30d1acfd0e696061
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.318979411Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f6075ea9-de5e-4575-af06-083d37c52cce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.319061079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.325685144Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2066a3c4e5b9fb03177f1eb036d13e46e40517b01a214e38f13a706356816461 UID:4f4d26d7-32a3-4ce1-b0ab-085f6459a353 NetNS:/var/run/netns/f93e5367-035a-402a-a60c-7bbb41b01a57 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028063d8}] Aliases:map[]}"
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.32572499Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.335630162Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2066a3c4e5b9fb03177f1eb036d13e46e40517b01a214e38f13a706356816461 UID:4f4d26d7-32a3-4ce1-b0ab-085f6459a353 NetNS:/var/run/netns/f93e5367-035a-402a-a60c-7bbb41b01a57 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028063d8}] Aliases:map[]}"
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.335791734Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.338614087Z" level=info msg="Ran pod sandbox 2066a3c4e5b9fb03177f1eb036d13e46e40517b01a214e38f13a706356816461 with infra container: default/busybox/POD" id=f6075ea9-de5e-4575-af06-083d37c52cce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.341612085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e8ba0ff-892e-4ef1-ab34-0d0ae9397efd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.341817268Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9e8ba0ff-892e-4ef1-ab34-0d0ae9397efd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.341923831Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9e8ba0ff-892e-4ef1-ab34-0d0ae9397efd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.343153272Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14f02636-8975-462d-99eb-b22fe435bea8 name=/runtime.v1.ImageService/PullImage
	Nov 23 11:16:51 no-preload-258179 crio[837]: time="2025-11-23T11:16:51.348610569Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.397533564Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=14f02636-8975-462d-99eb-b22fe435bea8 name=/runtime.v1.ImageService/PullImage
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.39841143Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6782378a-8062-4a64-ace6-99b040c7d2c7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.402104979Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=048c0655-cab7-4c58-9d29-18dafccacfb1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.408000876Z" level=info msg="Creating container: default/busybox/busybox" id=da428e95-b698-40cd-a1dd-3e3d77499f1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.40811763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.413004844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.413713196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.432039347Z" level=info msg="Created container a1acbd9a925cc6dcd023de51996c30cf35247ec06fc351a425e1916ef0df2ed4: default/busybox/busybox" id=da428e95-b698-40cd-a1dd-3e3d77499f1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.434010503Z" level=info msg="Starting container: a1acbd9a925cc6dcd023de51996c30cf35247ec06fc351a425e1916ef0df2ed4" id=80b3d27d-8898-4f10-ab5e-70c50032142d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:16:53 no-preload-258179 crio[837]: time="2025-11-23T11:16:53.437812345Z" level=info msg="Started container" PID=2543 containerID=a1acbd9a925cc6dcd023de51996c30cf35247ec06fc351a425e1916ef0df2ed4 description=default/busybox/busybox id=80b3d27d-8898-4f10-ab5e-70c50032142d name=/runtime.v1.RuntimeService/StartContainer sandboxID=2066a3c4e5b9fb03177f1eb036d13e46e40517b01a214e38f13a706356816461
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a1acbd9a925cc       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   2066a3c4e5b9f       busybox                                     default
	744aab3498404       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   1a0ffeb9cd87d       coredns-66bc5c9577-6xhlc                    kube-system
	03530744815d9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   65f82594be22e       storage-provisioner                         kube-system
	fc83037b1061e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   863999ac18b4a       kindnet-zbrwj                               kube-system
	275b252cb38d3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   6f465899aa3af       kube-proxy-twzmv                            kube-system
	df65373d63800       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      48 seconds ago      Running             kube-apiserver            0                   47b9dfde3322b       kube-apiserver-no-preload-258179            kube-system
	f75b663ae4b43       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      48 seconds ago      Running             kube-scheduler            0                   53d027892cd8e       kube-scheduler-no-preload-258179            kube-system
	6309f4e718aa8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      48 seconds ago      Running             etcd                      0                   e5e3fdf421bde       etcd-no-preload-258179                      kube-system
	5a3a00343039d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      48 seconds ago      Running             kube-controller-manager   0                   4cdd544f057c7       kube-controller-manager-no-preload-258179   kube-system
	
	
	==> coredns [744aab34984043baf757f6e2d23a4e2589a5fdf836337f8e02d9c7770db59353] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57225 - 56067 "HINFO IN 2399933757613941257.102647655291409433. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012953257s
	
	
	==> describe nodes <==
	Name:               no-preload-258179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-258179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-258179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_16_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:16:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-258179
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:16:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:16:56 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:16:56 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:16:56 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:16:56 +0000   Sun, 23 Nov 2025 11:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-258179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                31cf968a-925d-4e78-a2a3-d0d59827b56c
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-6xhlc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-258179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-zbrwj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-258179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-258179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-twzmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-258179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33s                node-controller  Node no-preload-258179 event: Registered Node no-preload-258179 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-258179 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:55] overlayfs: idmapped layers are currently not supported
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6309f4e718aa8607af4c941ae9e544dc9a6b6cdcd051e993a3a05c2db5b76e4b] <==
	{"level":"warn","ts":"2025-11-23T11:16:18.599417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.669360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.709102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.734884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.753918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.786979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.796917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.820237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.832578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.867638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.893234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.926909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.949852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.974094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:18.998741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.026385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.061965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.094823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.129616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.147147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.183196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.205591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.245805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.292332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:19.455076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37500","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:17:02 up  3:59,  0 user,  load average: 3.58, 3.54, 2.90
	Linux no-preload-258179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc83037b1061e326746f1c835c7437520bae9771c72ee9745fd97390249edc39] <==
	I1123 11:16:36.169301       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:16:36.169632       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:16:36.169763       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:16:36.169781       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:16:36.169790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:16:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:16:36.458516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:16:36.458543       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:16:36.458552       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:16:36.458927       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 11:16:36.661377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:16:36.664927       1 metrics.go:72] Registering metrics
	I1123 11:16:36.665050       1 controller.go:711] "Syncing nftables rules"
	I1123 11:16:46.377530       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:16:46.377590       1 main.go:301] handling current node
	I1123 11:16:56.370225       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:16:56.370260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [df65373d638000f4440ffd4752588d7ff8c66b523621ca9b2f10cd1bb42271a4] <==
	I1123 11:16:21.893823       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 11:16:21.893837       1 policy_source.go:240] refreshing policies
	I1123 11:16:21.934678       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:16:21.966628       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:21.976531       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 11:16:22.029143       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:16:22.062507       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:22.144692       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:16:22.145885       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:16:22.215510       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:16:22.223810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:16:24.251002       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:16:24.317033       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:16:24.432755       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:16:24.442286       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 11:16:24.443490       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:16:24.453802       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:16:24.775273       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:16:25.584476       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:16:25.623374       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:16:25.652868       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 11:16:30.581544       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:30.674676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:30.723302       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 11:16:31.049137       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5a3a00343039d971f7f3e3dfa8d3fffb5e29445ca01adc52355fe400b4a585b8] <==
	I1123 11:16:29.800665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 11:16:29.800846       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:16:29.801916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 11:16:29.803927       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-258179" podCIDRs=["10.244.0.0/24"]
	I1123 11:16:29.811820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:16:29.811921       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:16:29.811953       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:16:29.812109       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:16:29.812660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:16:29.813165       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 11:16:29.813391       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 11:16:29.813485       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 11:16:29.813727       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 11:16:29.813757       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:16:29.813788       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 11:16:29.814775       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:16:29.814999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 11:16:29.815042       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:16:29.815541       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:16:29.818372       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 11:16:29.818446       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:16:29.828237       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:16:29.833287       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 11:16:29.835982       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:16:49.763621       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [275b252cb38d3b2266570085b3f81d7316018b1cf5688c0b1011b47e9d5a5b5c] <==
	I1123 11:16:32.181631       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:16:32.352218       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:16:32.456873       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:16:32.456911       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:16:32.456978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:16:32.481223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:16:32.481307       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:16:32.485288       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:16:32.485661       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:16:32.485687       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:16:32.487415       1 config.go:200] "Starting service config controller"
	I1123 11:16:32.487463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:16:32.487483       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:16:32.487488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:16:32.487499       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:16:32.487513       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:16:32.490008       1 config.go:309] "Starting node config controller"
	I1123 11:16:32.490022       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:16:32.490029       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:16:32.587576       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:16:32.587609       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:16:32.587654       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f75b663ae4b438f9b518be912302265848881f1dd180802ce7045215fd32ce0a] <==
	I1123 11:16:18.281808       1 serving.go:386] Generated self-signed cert in-memory
	W1123 11:16:23.682898       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 11:16:23.683016       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 11:16:23.683051       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 11:16:23.683080       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 11:16:23.720912       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:16:23.731408       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:16:23.734451       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:16:23.734561       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:16:23.743785       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:16:23.734579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 11:16:23.761901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 11:16:24.844017       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:16:29 no-preload-258179 kubelet[2013]: I1123 11:16:29.885590    2013 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: E1123 11:16:31.021113    2013 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-258179\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-258179' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: E1123 11:16:31.021192    2013 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-zbrwj\" is forbidden: User \"system:node:no-preload-258179\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-258179' and this object" podUID="2a14a616-4705-45ee-9906-40c727e4de80" pod="kube-system/kindnet-zbrwj"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.076357    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a14a616-4705-45ee-9906-40c727e4de80-lib-modules\") pod \"kindnet-zbrwj\" (UID: \"2a14a616-4705-45ee-9906-40c727e4de80\") " pod="kube-system/kindnet-zbrwj"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.076414    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2a14a616-4705-45ee-9906-40c727e4de80-cni-cfg\") pod \"kindnet-zbrwj\" (UID: \"2a14a616-4705-45ee-9906-40c727e4de80\") " pod="kube-system/kindnet-zbrwj"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.076436    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a14a616-4705-45ee-9906-40c727e4de80-xtables-lock\") pod \"kindnet-zbrwj\" (UID: \"2a14a616-4705-45ee-9906-40c727e4de80\") " pod="kube-system/kindnet-zbrwj"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.076456    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwh7\" (UniqueName: \"kubernetes.io/projected/2a14a616-4705-45ee-9906-40c727e4de80-kube-api-access-ltwh7\") pod \"kindnet-zbrwj\" (UID: \"2a14a616-4705-45ee-9906-40c727e4de80\") " pod="kube-system/kindnet-zbrwj"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.183580    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcs4p\" (UniqueName: \"kubernetes.io/projected/f4b947ff-ebeb-4bdd-8e56-70af47c2527b-kube-api-access-gcs4p\") pod \"kube-proxy-twzmv\" (UID: \"f4b947ff-ebeb-4bdd-8e56-70af47c2527b\") " pod="kube-system/kube-proxy-twzmv"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.183654    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4b947ff-ebeb-4bdd-8e56-70af47c2527b-kube-proxy\") pod \"kube-proxy-twzmv\" (UID: \"f4b947ff-ebeb-4bdd-8e56-70af47c2527b\") " pod="kube-system/kube-proxy-twzmv"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.183685    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b947ff-ebeb-4bdd-8e56-70af47c2527b-lib-modules\") pod \"kube-proxy-twzmv\" (UID: \"f4b947ff-ebeb-4bdd-8e56-70af47c2527b\") " pod="kube-system/kube-proxy-twzmv"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.183716    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b947ff-ebeb-4bdd-8e56-70af47c2527b-xtables-lock\") pod \"kube-proxy-twzmv\" (UID: \"f4b947ff-ebeb-4bdd-8e56-70af47c2527b\") " pod="kube-system/kube-proxy-twzmv"
	Nov 23 11:16:31 no-preload-258179 kubelet[2013]: I1123 11:16:31.998517    2013 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:16:32 no-preload-258179 kubelet[2013]: W1123 11:16:32.050121    2013 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-6f465899aa3af47882fa149d1284c69679d15319c76fd3c60dc4c8321ca0022c WatchSource:0}: Error finding container 6f465899aa3af47882fa149d1284c69679d15319c76fd3c60dc4c8321ca0022c: Status 404 returned error can't find the container with id 6f465899aa3af47882fa149d1284c69679d15319c76fd3c60dc4c8321ca0022c
	Nov 23 11:16:32 no-preload-258179 kubelet[2013]: W1123 11:16:32.210820    2013 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-863999ac18b4aa9d548f24ebd02071653a58cca85ce4bae3f9b67e695b6331cc WatchSource:0}: Error finding container 863999ac18b4aa9d548f24ebd02071653a58cca85ce4bae3f9b67e695b6331cc: Status 404 returned error can't find the container with id 863999ac18b4aa9d548f24ebd02071653a58cca85ce4bae3f9b67e695b6331cc
	Nov 23 11:16:35 no-preload-258179 kubelet[2013]: I1123 11:16:35.981068    2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-twzmv" podStartSLOduration=5.98103853 podStartE2EDuration="5.98103853s" podCreationTimestamp="2025-11-23 11:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:16:33.078734455 +0000 UTC m=+7.556902928" watchObservedRunningTime="2025-11-23 11:16:35.98103853 +0000 UTC m=+10.459207003"
	Nov 23 11:16:46 no-preload-258179 kubelet[2013]: I1123 11:16:46.711437    2013 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 11:16:46 no-preload-258179 kubelet[2013]: I1123 11:16:46.739041    2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zbrwj" podStartSLOduration=13.016337452 podStartE2EDuration="16.739001519s" podCreationTimestamp="2025-11-23 11:16:30 +0000 UTC" firstStartedPulling="2025-11-23 11:16:32.216033478 +0000 UTC m=+6.694201943" lastFinishedPulling="2025-11-23 11:16:35.938697545 +0000 UTC m=+10.416866010" observedRunningTime="2025-11-23 11:16:37.108082961 +0000 UTC m=+11.586251443" watchObservedRunningTime="2025-11-23 11:16:46.739001519 +0000 UTC m=+21.217169992"
	Nov 23 11:16:46 no-preload-258179 kubelet[2013]: I1123 11:16:46.844638    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e9e3c249-589b-4f1c-ac62-f4d0107e35d7-tmp\") pod \"storage-provisioner\" (UID: \"e9e3c249-589b-4f1c-ac62-f4d0107e35d7\") " pod="kube-system/storage-provisioner"
	Nov 23 11:16:46 no-preload-258179 kubelet[2013]: I1123 11:16:46.844688    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg24p\" (UniqueName: \"kubernetes.io/projected/78882ceb-6384-470c-b326-06a53eb9d178-kube-api-access-dg24p\") pod \"coredns-66bc5c9577-6xhlc\" (UID: \"78882ceb-6384-470c-b326-06a53eb9d178\") " pod="kube-system/coredns-66bc5c9577-6xhlc"
	Nov 23 11:16:46 no-preload-258179 kubelet[2013]: I1123 11:16:46.844714    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkzr\" (UniqueName: \"kubernetes.io/projected/e9e3c249-589b-4f1c-ac62-f4d0107e35d7-kube-api-access-tpkzr\") pod \"storage-provisioner\" (UID: \"e9e3c249-589b-4f1c-ac62-f4d0107e35d7\") " pod="kube-system/storage-provisioner"
	Nov 23 11:16:46 no-preload-258179 kubelet[2013]: I1123 11:16:46.844735    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78882ceb-6384-470c-b326-06a53eb9d178-config-volume\") pod \"coredns-66bc5c9577-6xhlc\" (UID: \"78882ceb-6384-470c-b326-06a53eb9d178\") " pod="kube-system/coredns-66bc5c9577-6xhlc"
	Nov 23 11:16:47 no-preload-258179 kubelet[2013]: W1123 11:16:47.095759    2013 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-1a0ffeb9cd87d5efbcdeeb4fda7c2eca3e26102f893a410c30d1acfd0e696061 WatchSource:0}: Error finding container 1a0ffeb9cd87d5efbcdeeb4fda7c2eca3e26102f893a410c30d1acfd0e696061: Status 404 returned error can't find the container with id 1a0ffeb9cd87d5efbcdeeb4fda7c2eca3e26102f893a410c30d1acfd0e696061
	Nov 23 11:16:48 no-preload-258179 kubelet[2013]: I1123 11:16:48.178736    2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.178716611 podStartE2EDuration="16.178716611s" podCreationTimestamp="2025-11-23 11:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:16:48.164418037 +0000 UTC m=+22.642586510" watchObservedRunningTime="2025-11-23 11:16:48.178716611 +0000 UTC m=+22.656885075"
	Nov 23 11:16:48 no-preload-258179 kubelet[2013]: I1123 11:16:48.179366    2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6xhlc" podStartSLOduration=17.179355135 podStartE2EDuration="17.179355135s" podCreationTimestamp="2025-11-23 11:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:16:48.177648635 +0000 UTC m=+22.655817108" watchObservedRunningTime="2025-11-23 11:16:48.179355135 +0000 UTC m=+22.657523599"
	Nov 23 11:16:51 no-preload-258179 kubelet[2013]: I1123 11:16:51.069466    2013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflxc\" (UniqueName: \"kubernetes.io/projected/4f4d26d7-32a3-4ce1-b0ab-085f6459a353-kube-api-access-dflxc\") pod \"busybox\" (UID: \"4f4d26d7-32a3-4ce1-b0ab-085f6459a353\") " pod="default/busybox"
	
	
	==> storage-provisioner [03530744815d9921709126a28a64687df1fdab499293303cbf18ef2f32ba510e] <==
	I1123 11:16:47.132209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:16:47.154897       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:16:47.154942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:16:47.165947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:47.200979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:16:47.206109       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:16:47.206782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ee794a4-039d-48f2-a5ae-7703aaab1a1e", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-258179_193e0db9-19db-40c0-8c48-8f4e0adc8f68 became leader
	I1123 11:16:47.207055       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-258179_193e0db9-19db-40c0-8c48-8f4e0adc8f68!
	W1123 11:16:47.268744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:47.279168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:16:47.307993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-258179_193e0db9-19db-40c0-8c48-8f4e0adc8f68!
	W1123 11:16:49.283310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:49.288187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:51.291333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:51.295914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:53.299415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:53.303779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:55.307274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:55.311716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:57.315816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:57.322605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:59.325855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:16:59.330290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:01.334900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:01.342355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-258179 -n no-preload-258179
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-258179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.75s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (366.684533ms)

-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:17:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
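The MK_ADDON_ENABLE_PAUSED exit above happens before the addon is ever applied: `addons enable` first checks whether the cluster is paused by running `sudo runc list -f json` inside the node, and on this crio node the runc state directory /run/runc does not exist, so the command exits 1 and the whole enable aborts. A rough Go sketch of that check follows (not minikube's actual code; the helper name is hypothetical, and only the runc command and the "open /run/runc: no such file or directory" failure mode are taken from the log). It treats the missing state directory as "nothing is paused" instead of a fatal error:

// listPaused approximates the paused check that fails above: run
// `sudo runc list -f json` (as minikube does on the node) and collect
// the IDs of containers whose status is "paused".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "paused", "running", ...
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// A missing state directory (no containers ever created under
		// /run/runc) is treated as "no paused containers", not an error.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

Run inside the node (for example via `minikube ssh`), this would report an empty list in the situation captured above rather than aborting the addon enable.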
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-715679 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-715679 describe deploy/metrics-server -n kube-system: exit status 1 (142.730185ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-715679 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
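The assertion at start_stop_delete_test.go:219 fails only as a consequence of the earlier exit: because `addons enable` bailed out on the paused check, the metrics-server deployment was never created, so the describe output the test searches for the overridden image reference is empty. A simplified stand-in for that check (not the real test helper; the kubectl command and expected string are taken from the log above):

// Describe the metrics-server deployment and require the overridden image
// reference (registry fake.domain, image registry.k8s.io/echoserver:1.4)
// to appear in the output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const expected = "fake.domain/registry.k8s.io/echoserver:1.4"

	out, err := exec.Command("kubectl", "--context", "embed-certs-715679",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		// Matches the failure above: the deployment does not exist because
		// the addon was never applied.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), expected) {
		fmt.Printf("addon did not load correct image; expected %q in:\n%s", expected, out)
		return
	}
	fmt.Println("metrics-server is using the overridden image")
}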
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-715679
helpers_test.go:243: (dbg) docker inspect embed-certs-715679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944",
	        "Created": "2025-11-23T11:15:57.805460889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 724929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:15:57.909893898Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/hosts",
	        "LogPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944-json.log",
	        "Name": "/embed-certs-715679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-715679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-715679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944",
	                "LowerDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-715679",
	                "Source": "/var/lib/docker/volumes/embed-certs-715679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-715679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-715679",
	                "name.minikube.sigs.k8s.io": "embed-certs-715679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c793690a67e684a716a1c9ad99a1d4742e27d3f159d73766506a0e611ed498f",
	            "SandboxKey": "/var/run/docker/netns/8c793690a67e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-715679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:3d:b4:55:f2:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9dc6254b6af11e97f0c613269fd92518cae572b3a5313c8e4edd68d21062116b",
	                    "EndpointID": "481d0347a7abe6973457b9ea30ba57991758326f3cbcc2e92d919a4882de56d4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-715679",
	                        "bf3b5a2f915e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-715679 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-715679 logs -n 25: (1.533041089s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ delete  │ -p pause-851396                                                                                                                                                                                                                               │ pause-851396             │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:11 UTC │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:11 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p force-systemd-env-613417                                                                                                                                                                                                                   │ force-systemd-env-613417 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ cert-options-700578 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578      │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	│ stop    │ -p old-k8s-version-378086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387   │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179        │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679       │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:17:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:17:15.563356  728764 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:17:15.563477  728764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:17:15.563486  728764 out.go:374] Setting ErrFile to fd 2...
	I1123 11:17:15.563492  728764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:17:15.563759  728764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:17:15.564117  728764 out.go:368] Setting JSON to false
	I1123 11:17:15.564978  728764 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14385,"bootTime":1763882251,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:17:15.565093  728764 start.go:143] virtualization:  
	I1123 11:17:15.568328  728764 out.go:179] * [no-preload-258179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:17:15.572120  728764 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:17:15.572270  728764 notify.go:221] Checking for updates...
	I1123 11:17:15.577980  728764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:17:15.580960  728764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:15.583857  728764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:17:15.586730  728764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:17:15.589595  728764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:17:15.593001  728764 config.go:182] Loaded profile config "no-preload-258179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:15.593655  728764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:17:15.619712  728764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:17:15.619834  728764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:17:15.688988  728764 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:17:15.679415009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:17:15.689126  728764 docker.go:319] overlay module found
	I1123 11:17:15.692415  728764 out.go:179] * Using the docker driver based on existing profile
	I1123 11:17:15.695292  728764 start.go:309] selected driver: docker
	I1123 11:17:15.695314  728764 start.go:927] validating driver "docker" against &{Name:no-preload-258179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:15.695409  728764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:17:15.696133  728764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:17:15.753071  728764 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:17:15.744170448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:17:15.753463  728764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:17:15.753498  728764 cni.go:84] Creating CNI manager for ""
	I1123 11:17:15.753559  728764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:17:15.753602  728764 start.go:353] cluster config:
	{Name:no-preload-258179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:15.758616  728764 out.go:179] * Starting "no-preload-258179" primary control-plane node in "no-preload-258179" cluster
	I1123 11:17:15.761401  728764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:17:15.764463  728764 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:17:15.768238  728764 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:17:15.768396  728764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/config.json ...
	I1123 11:17:15.768740  728764 cache.go:107] acquiring lock: {Name:mk6b49d4e42bab9b7bfa0e2eb79fe3097bbd8e9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.768827  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 11:17:15.768842  728764 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.838µs
	I1123 11:17:15.768855  728764 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 11:17:15.768873  728764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:17:15.769058  728764 cache.go:107] acquiring lock: {Name:mk78b6f6998312b92a9cded2805d1a9f04b95cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769131  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 11:17:15.769144  728764 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 91.489µs
	I1123 11:17:15.769152  728764 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 11:17:15.769170  728764 cache.go:107] acquiring lock: {Name:mkb0b2e2aa1ad0765dbc78a44285393cf50c5901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769221  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 11:17:15.769231  728764 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 62.583µs
	I1123 11:17:15.769238  728764 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 11:17:15.769254  728764 cache.go:107] acquiring lock: {Name:mk3df6fc55110a96283128d28bb5f2f565c446b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769286  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 11:17:15.769296  728764 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.635µs
	I1123 11:17:15.769302  728764 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 11:17:15.769312  728764 cache.go:107] acquiring lock: {Name:mk227cbb290f66141e8bcdcf285a2cdc7216bf06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769338  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 11:17:15.769347  728764 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 36.785µs
	I1123 11:17:15.769353  728764 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 11:17:15.769365  728764 cache.go:107] acquiring lock: {Name:mk4587c20b249962e3069b49ff037489e732b445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769396  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 11:17:15.769443  728764 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 78.098µs
	I1123 11:17:15.769453  728764 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 11:17:15.769463  728764 cache.go:107] acquiring lock: {Name:mkca83e04fc9d3d0ddd0883ea37274aeef2e425f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769499  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 11:17:15.769508  728764 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 46.237µs
	I1123 11:17:15.769517  728764 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 11:17:15.769528  728764 cache.go:107] acquiring lock: {Name:mkfca1c676349ac6eca36139e91546ebead8718a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.769561  728764 cache.go:115] /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 11:17:15.769571  728764 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 43.965µs
	I1123 11:17:15.769577  728764 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 11:17:15.769584  728764 cache.go:87] Successfully saved all images to host disk.
	I1123 11:17:15.788821  728764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:17:15.788845  728764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:17:15.788868  728764 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:17:15.788900  728764 start.go:360] acquireMachinesLock for no-preload-258179: {Name:mkd7fac6331974361e1c4a4ff23107b024de164a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:15.788964  728764 start.go:364] duration metric: took 44.26µs to acquireMachinesLock for "no-preload-258179"
	I1123 11:17:15.788987  728764 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:17:15.788998  728764 fix.go:54] fixHost starting: 
	I1123 11:17:15.789274  728764 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:17:15.806272  728764 fix.go:112] recreateIfNeeded on no-preload-258179: state=Stopped err=<nil>
	W1123 11:17:15.806303  728764 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:17:11.234197  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:17:13.733595  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	W1123 11:17:16.234130  724363 node_ready.go:57] node "embed-certs-715679" has "Ready":"False" status (will retry)
	I1123 11:17:17.233563  724363 node_ready.go:49] node "embed-certs-715679" is "Ready"
	I1123 11:17:17.233595  724363 node_ready.go:38] duration metric: took 40.502865868s for node "embed-certs-715679" to be "Ready" ...
	I1123 11:17:17.233609  724363 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:17:17.233671  724363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:17:17.245805  724363 api_server.go:72] duration metric: took 41.829299861s to wait for apiserver process to appear ...
	I1123 11:17:17.245834  724363 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:17:17.245872  724363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:17:17.254876  724363 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:17:17.255948  724363 api_server.go:141] control plane version: v1.34.1
	I1123 11:17:17.255974  724363 api_server.go:131] duration metric: took 10.117512ms to wait for apiserver health ...
	I1123 11:17:17.256010  724363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:17:17.259144  724363 system_pods.go:59] 8 kube-system pods found
	I1123 11:17:17.259190  724363 system_pods.go:61] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:17:17.259196  724363 system_pods.go:61] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running
	I1123 11:17:17.259202  724363 system_pods.go:61] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:17:17.259206  724363 system_pods.go:61] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running
	I1123 11:17:17.259211  724363 system_pods.go:61] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running
	I1123 11:17:17.259215  724363 system_pods.go:61] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:17:17.259219  724363 system_pods.go:61] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running
	I1123 11:17:17.259225  724363 system_pods.go:61] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:17:17.259239  724363 system_pods.go:74] duration metric: took 3.215971ms to wait for pod list to return data ...
	I1123 11:17:17.259248  724363 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:17:17.262166  724363 default_sa.go:45] found service account: "default"
	I1123 11:17:17.262191  724363 default_sa.go:55] duration metric: took 2.934404ms for default service account to be created ...
	I1123 11:17:17.262202  724363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:17:17.265531  724363 system_pods.go:86] 8 kube-system pods found
	I1123 11:17:17.265566  724363 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:17:17.265574  724363 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running
	I1123 11:17:17.265581  724363 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:17:17.265586  724363 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running
	I1123 11:17:17.265591  724363 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running
	I1123 11:17:17.265597  724363 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:17:17.265602  724363 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running
	I1123 11:17:17.265613  724363 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:17:17.265639  724363 retry.go:31] will retry after 274.322856ms: missing components: kube-dns
	I1123 11:17:17.544774  724363 system_pods.go:86] 8 kube-system pods found
	I1123 11:17:17.544824  724363 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:17:17.544833  724363 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running
	I1123 11:17:17.544839  724363 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:17:17.544843  724363 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running
	I1123 11:17:17.544848  724363 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running
	I1123 11:17:17.544851  724363 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:17:17.544862  724363 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running
	I1123 11:17:17.544871  724363 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:17:17.544886  724363 retry.go:31] will retry after 353.889779ms: missing components: kube-dns
	I1123 11:17:17.903079  724363 system_pods.go:86] 8 kube-system pods found
	I1123 11:17:17.903118  724363 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:17:17.903125  724363 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running
	I1123 11:17:17.903131  724363 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:17:17.903169  724363 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running
	I1123 11:17:17.903181  724363 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running
	I1123 11:17:17.903186  724363 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:17:17.903190  724363 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running
	I1123 11:17:17.903195  724363 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:17:17.903212  724363 retry.go:31] will retry after 383.988516ms: missing components: kube-dns
	I1123 11:17:18.291004  724363 system_pods.go:86] 8 kube-system pods found
	I1123 11:17:18.291043  724363 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:17:18.291051  724363 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running
	I1123 11:17:18.291071  724363 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:17:18.291082  724363 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running
	I1123 11:17:18.291087  724363 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running
	I1123 11:17:18.291099  724363 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:17:18.291110  724363 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running
	I1123 11:17:18.291116  724363 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:17:18.291131  724363 retry.go:31] will retry after 367.748291ms: missing components: kube-dns
	I1123 11:17:18.663280  724363 system_pods.go:86] 8 kube-system pods found
	I1123 11:17:18.663314  724363 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Running
	I1123 11:17:18.663322  724363 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running
	I1123 11:17:18.663326  724363 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:17:18.663355  724363 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running
	I1123 11:17:18.663366  724363 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running
	I1123 11:17:18.663370  724363 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:17:18.663374  724363 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running
	I1123 11:17:18.663383  724363 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Running
	I1123 11:17:18.663391  724363 system_pods.go:126] duration metric: took 1.401183225s to wait for k8s-apps to be running ...
	I1123 11:17:18.663401  724363 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:17:18.663472  724363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:17:18.676495  724363 system_svc.go:56] duration metric: took 13.08312ms WaitForService to wait for kubelet
	I1123 11:17:18.676576  724363 kubeadm.go:587] duration metric: took 43.260075783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:17:18.676610  724363 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:17:18.680111  724363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:17:18.680143  724363 node_conditions.go:123] node cpu capacity is 2
	I1123 11:17:18.680159  724363 node_conditions.go:105] duration metric: took 3.524943ms to run NodePressure ...
	I1123 11:17:18.680190  724363 start.go:242] waiting for startup goroutines ...
	I1123 11:17:18.680202  724363 start.go:247] waiting for cluster config update ...
	I1123 11:17:18.680214  724363 start.go:256] writing updated cluster config ...
	I1123 11:17:18.680514  724363 ssh_runner.go:195] Run: rm -f paused
	I1123 11:17:18.684201  724363 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:17:18.687837  724363 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9gghc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:18.692910  724363 pod_ready.go:94] pod "coredns-66bc5c9577-9gghc" is "Ready"
	I1123 11:17:18.692938  724363 pod_ready.go:86] duration metric: took 5.074424ms for pod "coredns-66bc5c9577-9gghc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:18.695616  724363 pod_ready.go:83] waiting for pod "etcd-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:18.700216  724363 pod_ready.go:94] pod "etcd-embed-certs-715679" is "Ready"
	I1123 11:17:18.700242  724363 pod_ready.go:86] duration metric: took 4.60009ms for pod "etcd-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:18.702640  724363 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:18.708220  724363 pod_ready.go:94] pod "kube-apiserver-embed-certs-715679" is "Ready"
	I1123 11:17:18.708250  724363 pod_ready.go:86] duration metric: took 5.582433ms for pod "kube-apiserver-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:18.710752  724363 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:19.087840  724363 pod_ready.go:94] pod "kube-controller-manager-embed-certs-715679" is "Ready"
	I1123 11:17:19.087866  724363 pod_ready.go:86] duration metric: took 377.090924ms for pod "kube-controller-manager-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:19.288889  724363 pod_ready.go:83] waiting for pod "kube-proxy-84tx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:19.692148  724363 pod_ready.go:94] pod "kube-proxy-84tx6" is "Ready"
	I1123 11:17:19.692171  724363 pod_ready.go:86] duration metric: took 403.257609ms for pod "kube-proxy-84tx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:19.888839  724363 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:20.288536  724363 pod_ready.go:94] pod "kube-scheduler-embed-certs-715679" is "Ready"
	I1123 11:17:20.288570  724363 pod_ready.go:86] duration metric: took 399.70545ms for pod "kube-scheduler-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:17:20.288583  724363 pod_ready.go:40] duration metric: took 1.604349614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:17:20.370279  724363 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:17:20.374076  724363 out.go:179] * Done! kubectl is now configured to use "embed-certs-715679" cluster and "default" namespace by default
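The pod_ready loop above polls each control-plane pod until it reports Ready. The same readiness can be re-checked against the finished cluster with kubectl; a minimal sketch, assuming the embed-certs-715679 context that minikube just wrote:

	kubectl --context embed-certs-715679 -n kube-system wait --for=condition=Ready pod --all --timeout=240s
	kubectl --context embed-certs-715679 -n kube-system get pods -l k8s-app=kube-dns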
	I1123 11:17:15.809571  728764 out.go:252] * Restarting existing docker container for "no-preload-258179" ...
	I1123 11:17:15.809652  728764 cli_runner.go:164] Run: docker start no-preload-258179
	I1123 11:17:16.089121  728764 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:17:16.112444  728764 kic.go:430] container "no-preload-258179" state is running.
	I1123 11:17:16.113711  728764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-258179
	I1123 11:17:16.136981  728764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/config.json ...
	I1123 11:17:16.137218  728764 machine.go:94] provisionDockerMachine start ...
	I1123 11:17:16.137276  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:16.160038  728764 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:16.160358  728764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1123 11:17:16.160367  728764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:17:16.161513  728764 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:17:19.313631  728764 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-258179
	
	I1123 11:17:19.313718  728764 ubuntu.go:182] provisioning hostname "no-preload-258179"
	I1123 11:17:19.313804  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:19.332861  728764 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:19.333168  728764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1123 11:17:19.333188  728764 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-258179 && echo "no-preload-258179" | sudo tee /etc/hostname
	I1123 11:17:19.501607  728764 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-258179
	
	I1123 11:17:19.501685  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:19.519627  728764 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:19.520168  728764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1123 11:17:19.520195  728764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-258179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-258179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-258179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:17:19.674160  728764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:17:19.674186  728764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:17:19.674213  728764 ubuntu.go:190] setting up certificates
	I1123 11:17:19.674240  728764 provision.go:84] configureAuth start
	I1123 11:17:19.674328  728764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-258179
	I1123 11:17:19.707847  728764 provision.go:143] copyHostCerts
	I1123 11:17:19.707925  728764 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:17:19.707943  728764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:17:19.708018  728764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:17:19.708127  728764 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:17:19.708139  728764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:17:19.708165  728764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:17:19.708262  728764 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:17:19.708271  728764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:17:19.708296  728764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:17:19.708355  728764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.no-preload-258179 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-258179]
	I1123 11:17:19.839562  728764 provision.go:177] copyRemoteCerts
	I1123 11:17:19.839655  728764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:17:19.839713  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:19.856938  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:19.961254  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:17:19.982459  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:17:20.003252  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:17:20.025270  728764 provision.go:87] duration metric: took 350.999023ms to configureAuth
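configureAuth above regenerates the machine server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-258179). A sketch for confirming which SANs landed in the cert, using the server.pem path from the copyHostCerts/scp lines:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'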
	I1123 11:17:20.025303  728764 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:17:20.025601  728764 config.go:182] Loaded profile config "no-preload-258179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:20.025752  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:20.045809  728764 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:20.046142  728764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1123 11:17:20.046161  728764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:17:20.475234  728764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:17:20.475347  728764 machine.go:97] duration metric: took 4.33810276s to provisionDockerMachine
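The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O inside the node container. A quick spot-check from the host, assuming the no-preload-258179 container is still up:

	docker exec no-preload-258179 cat /etc/sysconfig/crio.minikube
	docker exec no-preload-258179 systemctl is-active crio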
	I1123 11:17:20.475365  728764 start.go:293] postStartSetup for "no-preload-258179" (driver="docker")
	I1123 11:17:20.475378  728764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:17:20.475556  728764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:17:20.475801  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:20.518062  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:20.645132  728764 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:17:20.655776  728764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:17:20.655803  728764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:17:20.655814  728764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:17:20.655873  728764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:17:20.655947  728764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:17:20.656056  728764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:17:20.667698  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:17:20.690219  728764 start.go:296] duration metric: took 214.838459ms for postStartSetup
	I1123 11:17:20.690315  728764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:17:20.690359  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:20.708802  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:20.810426  728764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:17:20.815081  728764 fix.go:56] duration metric: took 5.026076713s for fixHost
	I1123 11:17:20.815109  728764 start.go:83] releasing machines lock for "no-preload-258179", held for 5.026133567s
	I1123 11:17:20.815197  728764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-258179
	I1123 11:17:20.833737  728764 ssh_runner.go:195] Run: cat /version.json
	I1123 11:17:20.833796  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:20.834081  728764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:17:20.834146  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:20.864094  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:20.865268  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:21.086589  728764 ssh_runner.go:195] Run: systemctl --version
	I1123 11:17:21.093095  728764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:17:21.132411  728764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:17:21.137456  728764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:17:21.137563  728764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:17:21.147442  728764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:17:21.147477  728764 start.go:496] detecting cgroup driver to use...
	I1123 11:17:21.147527  728764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:17:21.147591  728764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:17:21.163376  728764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:17:21.178958  728764 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:17:21.179021  728764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:17:21.195879  728764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:17:21.211256  728764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:17:21.331190  728764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:17:21.457673  728764 docker.go:234] disabling docker service ...
	I1123 11:17:21.457765  728764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:17:21.474642  728764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:17:21.488076  728764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:17:21.628345  728764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:17:21.758952  728764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:17:21.772019  728764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:17:21.787412  728764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:17:21.787485  728764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.796716  728764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:17:21.796840  728764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.806774  728764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.816154  728764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.825311  728764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:17:21.834229  728764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.843680  728764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.852098  728764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:21.861309  728764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:17:21.870239  728764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:17:21.878103  728764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:21.990061  728764 ssh_runner.go:195] Run: sudo systemctl restart crio
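The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from those commands (a sketch, not a capture of the file on the node), the relevant keys in that drop-in should end up roughly as:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]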
	I1123 11:17:22.178119  728764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:17:22.178271  728764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:17:22.186053  728764 start.go:564] Will wait 60s for crictl version
	I1123 11:17:22.186197  728764 ssh_runner.go:195] Run: which crictl
	I1123 11:17:22.190369  728764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:17:22.220155  728764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
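The crictl calls above resolve the runtime endpoint through the /etc/crictl.yaml written earlier; the equivalent explicit form, as a sketch, is:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version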
	I1123 11:17:22.220312  728764 ssh_runner.go:195] Run: crio --version
	I1123 11:17:22.249070  728764 ssh_runner.go:195] Run: crio --version
	I1123 11:17:22.282545  728764 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:17:22.285558  728764 cli_runner.go:164] Run: docker network inspect no-preload-258179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:17:22.301852  728764 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:17:22.305527  728764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:17:22.315527  728764 kubeadm.go:884] updating cluster {Name:no-preload-258179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:17:22.315661  728764 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:17:22.315704  728764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:17:22.349831  728764 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:17:22.349854  728764 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:17:22.349861  728764 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 11:17:22.349956  728764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-258179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
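The kubelet override above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step further down; the merged unit on the node can be inspected with (sketch):

	sudo systemctl cat kubelet
	sudo systemctl show kubelet --property=ExecStart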
	I1123 11:17:22.350038  728764 ssh_runner.go:195] Run: crio config
	I1123 11:17:22.409649  728764 cni.go:84] Creating CNI manager for ""
	I1123 11:17:22.409671  728764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:17:22.409696  728764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:17:22.409722  728764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-258179 NodeName:no-preload-258179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:17:22.409853  728764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-258179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:17:22.409942  728764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:17:22.418943  728764 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:17:22.419057  728764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:17:22.426960  728764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 11:17:22.440686  728764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:17:22.453817  728764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
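The kubeadm configuration shown above has now been written to /var/tmp/minikube/kubeadm.yaml.new. On kubeadm releases that ship the subcommand (it is not present in older versions), the file can be sanity-checked offline with (sketch):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new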
	I1123 11:17:22.467643  728764 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:17:22.471680  728764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:17:22.482114  728764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:22.606373  728764 ssh_runner.go:195] Run: sudo systemctl start kubelet
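If kubelet fails to start at this point, the usual places to look on the node are the unit status and its journal (sketch):

	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 100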
	I1123 11:17:22.626753  728764 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179 for IP: 192.168.85.2
	I1123 11:17:22.626776  728764 certs.go:195] generating shared ca certs ...
	I1123 11:17:22.626794  728764 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:22.627002  728764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:17:22.627072  728764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:17:22.627086  728764 certs.go:257] generating profile certs ...
	I1123 11:17:22.627211  728764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.key
	I1123 11:17:22.627326  728764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key.016482d5
	I1123 11:17:22.627406  728764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.key
	I1123 11:17:22.627575  728764 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:17:22.627634  728764 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:17:22.627652  728764 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:17:22.627699  728764 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:17:22.627755  728764 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:17:22.627787  728764 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:17:22.627869  728764 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:17:22.628566  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:17:22.651260  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:17:22.677570  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:17:22.698245  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:17:22.721701  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:17:22.747111  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:17:22.768127  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:17:22.797013  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:17:22.819035  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:17:22.840085  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:17:22.863738  728764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:17:22.894784  728764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:17:22.918700  728764 ssh_runner.go:195] Run: openssl version
	I1123 11:17:22.935822  728764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:17:22.949932  728764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:22.956250  728764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:22.956343  728764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:23.009244  728764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:17:23.019845  728764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:17:23.030288  728764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:17:23.035021  728764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:17:23.035117  728764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:17:23.082492  728764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:17:23.092012  728764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:17:23.101987  728764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:17:23.106196  728764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:17:23.106288  728764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:17:23.156297  728764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
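Each of the three CA files above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trusted CAs. The hash that determines the link name is the one computed by the preceding openssl calls, e.g. (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0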
	I1123 11:17:23.165322  728764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:17:23.169734  728764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:17:23.211967  728764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:17:23.255435  728764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:17:23.311385  728764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:17:23.385777  728764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:17:23.447140  728764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
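The -checkend 86400 checks above exit non-zero only if the certificate expires within the next 24 hours (86,400 seconds), which is what would trigger regeneration; for example (sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo 'valid for 24h+' || echo 'expiring soon'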
	I1123 11:17:23.574883  728764 kubeadm.go:401] StartCluster: {Name:no-preload-258179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-258179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:23.574978  728764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:17:23.575058  728764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:17:23.632877  728764 cri.go:89] found id: "762418eef7f5d57e699ef90acb86c4c9536c1542ec092c57afbb3936b8bccbf0"
	I1123 11:17:23.632901  728764 cri.go:89] found id: "61200a3335e64686b202c4b4402ab443dd01b7464a2ab00988d127cf932cb937"
	I1123 11:17:23.632906  728764 cri.go:89] found id: "da30f05ba9041e558527bda7b8ad6c0615aca7408e5d54c45850e08dc7dc706d"
	I1123 11:17:23.632909  728764 cri.go:89] found id: "329ee3cb780bc0ff84833eede69619e39622914b4a5243d5aacfed9e80e40108"
	I1123 11:17:23.632913  728764 cri.go:89] found id: ""
	I1123 11:17:23.632985  728764 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:17:23.659223  728764 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:17:23Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:17:23.659332  728764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:17:23.674658  728764 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:17:23.674679  728764 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:17:23.674756  728764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:17:23.689137  728764 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:17:23.690045  728764 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-258179" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:23.690603  728764 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-258179" cluster setting kubeconfig missing "no-preload-258179" context setting]
	I1123 11:17:23.691382  728764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:23.693379  728764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:17:23.707349  728764 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 11:17:23.707384  728764 kubeadm.go:602] duration metric: took 32.697915ms to restartPrimaryControlPlane
	I1123 11:17:23.707394  728764 kubeadm.go:403] duration metric: took 132.523479ms to StartCluster
	I1123 11:17:23.707428  728764 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:23.707508  728764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:23.709030  728764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:23.709322  728764 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:17:23.709894  728764 config.go:182] Loaded profile config "no-preload-258179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:23.709863  728764 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:17:23.709943  728764 addons.go:70] Setting dashboard=true in profile "no-preload-258179"
	I1123 11:17:23.709952  728764 addons.go:70] Setting default-storageclass=true in profile "no-preload-258179"
	I1123 11:17:23.709956  728764 addons.go:239] Setting addon dashboard=true in "no-preload-258179"
	W1123 11:17:23.709963  728764 addons.go:248] addon dashboard should already be in state true
	I1123 11:17:23.709965  728764 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-258179"
	I1123 11:17:23.709985  728764 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:17:23.710265  728764 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:17:23.710512  728764 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:17:23.709943  728764 addons.go:70] Setting storage-provisioner=true in profile "no-preload-258179"
	I1123 11:17:23.714787  728764 addons.go:239] Setting addon storage-provisioner=true in "no-preload-258179"
	W1123 11:17:23.714819  728764 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:17:23.714886  728764 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:17:23.719780  728764 out.go:179] * Verifying Kubernetes components...
	I1123 11:17:23.720406  728764 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:17:23.728482  728764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:23.765110  728764 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:17:23.768154  728764 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:17:23.771264  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:17:23.771288  728764 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:17:23.771361  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:23.782862  728764 addons.go:239] Setting addon default-storageclass=true in "no-preload-258179"
	W1123 11:17:23.782882  728764 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:17:23.782922  728764 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:17:23.783343  728764 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:17:23.794951  728764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:17:23.801627  728764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:17:23.801657  728764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:17:23.801739  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:23.813758  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:23.833568  728764 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:17:23.833594  728764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:17:23.833657  728764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:17:23.847069  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:23.876398  728764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:17:24.062281  728764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:17:24.096033  728764 node_ready.go:35] waiting up to 6m0s for node "no-preload-258179" to be "Ready" ...
	I1123 11:17:24.121313  728764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:17:24.161643  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:17:24.161719  728764 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:17:24.190064  728764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:17:24.214751  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:17:24.214825  728764 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:17:24.297842  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:17:24.297910  728764 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:17:24.364914  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:17:24.364986  728764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:17:24.416319  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:17:24.416394  728764 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:17:24.483094  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:17:24.483172  728764 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:17:24.531081  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:17:24.531160  728764 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:17:24.561903  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:17:24.561983  728764 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:17:24.584186  728764 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:17:24.584262  728764 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:17:24.611709  728764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
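Once the apply above completes, the dashboard addon's objects live in the kubernetes-dashboard namespace; a sketch for checking them, assuming the no-preload-258179 context:

	kubectl --context no-preload-258179 -n kubernetes-dashboard get deploy,pods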
	
	
	==> CRI-O <==
	Nov 23 11:17:17 embed-certs-715679 crio[840]: time="2025-11-23T11:17:17.626633006Z" level=info msg="Created container e8fdd569f726340e5b371a170eddc42baf46778a3a021c8f243de51ce9586353: kube-system/coredns-66bc5c9577-9gghc/coredns" id=df4281fd-296c-4467-9f0e-f9251dc02a75 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:17:17 embed-certs-715679 crio[840]: time="2025-11-23T11:17:17.627785899Z" level=info msg="Starting container: e8fdd569f726340e5b371a170eddc42baf46778a3a021c8f243de51ce9586353" id=de643562-e03f-47d9-ac99-94267a63061f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:17:17 embed-certs-715679 crio[840]: time="2025-11-23T11:17:17.629628959Z" level=info msg="Started container" PID=1735 containerID=e8fdd569f726340e5b371a170eddc42baf46778a3a021c8f243de51ce9586353 description=kube-system/coredns-66bc5c9577-9gghc/coredns id=de643562-e03f-47d9-ac99-94267a63061f name=/runtime.v1.RuntimeService/StartContainer sandboxID=93cf903bcf2b3f97058e449f32a3ff8d105069a6d0f6494b594dae018e49fcb8
	Nov 23 11:17:20 embed-certs-715679 crio[840]: time="2025-11-23T11:17:20.971509757Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b51bb3ff-2cf3-493d-b62d-339807e7341c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:17:20 embed-certs-715679 crio[840]: time="2025-11-23T11:17:20.97157933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:17:20 embed-certs-715679 crio[840]: time="2025-11-23T11:17:20.981334231Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025 UID:fbad8dcc-4eb1-420d-badc-d21b074bec9c NetNS:/var/run/netns/28a4b5ab-e7c9-4a1f-93cc-b5e85be3427d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000790d0}] Aliases:map[]}"
	Nov 23 11:17:20 embed-certs-715679 crio[840]: time="2025-11-23T11:17:20.981371459Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 11:17:20 embed-certs-715679 crio[840]: time="2025-11-23T11:17:20.992833252Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025 UID:fbad8dcc-4eb1-420d-badc-d21b074bec9c NetNS:/var/run/netns/28a4b5ab-e7c9-4a1f-93cc-b5e85be3427d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000790d0}] Aliases:map[]}"
	Nov 23 11:17:20 embed-certs-715679 crio[840]: time="2025-11-23T11:17:20.992974843Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 11:17:21 embed-certs-715679 crio[840]: time="2025-11-23T11:17:21.000116325Z" level=info msg="Ran pod sandbox ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025 with infra container: default/busybox/POD" id=b51bb3ff-2cf3-493d-b62d-339807e7341c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:17:21 embed-certs-715679 crio[840]: time="2025-11-23T11:17:21.002210099Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fdf43a15-4bd4-44b8-9155-38eb1c8554be name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:17:21 embed-certs-715679 crio[840]: time="2025-11-23T11:17:21.00239979Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fdf43a15-4bd4-44b8-9155-38eb1c8554be name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:17:21 embed-certs-715679 crio[840]: time="2025-11-23T11:17:21.002470274Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fdf43a15-4bd4-44b8-9155-38eb1c8554be name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:17:21 embed-certs-715679 crio[840]: time="2025-11-23T11:17:21.005765672Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9ad0a5ec-f1a0-411f-bbdf-6566f35b3f60 name=/runtime.v1.ImageService/PullImage
	Nov 23 11:17:21 embed-certs-715679 crio[840]: time="2025-11-23T11:17:21.008845623Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.236033304Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9ad0a5ec-f1a0-411f-bbdf-6566f35b3f60 name=/runtime.v1.ImageService/PullImage
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.237512736Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f91a768f-7d5a-42d2-b279-cf54df9c706a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.241654909Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea744489-724d-4df8-a6a3-bd88702df224 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.249665656Z" level=info msg="Creating container: default/busybox/busybox" id=5aeb8d7c-fac7-4ee7-abaf-3467353a89a1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.249930853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.255926786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.256805373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.280770831Z" level=info msg="Created container 06354b97274dc7d32c33c20a6e83449d7b8f987084588b32612722655feb531b: default/busybox/busybox" id=5aeb8d7c-fac7-4ee7-abaf-3467353a89a1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.286044193Z" level=info msg="Starting container: 06354b97274dc7d32c33c20a6e83449d7b8f987084588b32612722655feb531b" id=58f20477-0be2-44a5-9123-37cc37cffcea name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:17:23 embed-certs-715679 crio[840]: time="2025-11-23T11:17:23.29903387Z" level=info msg="Started container" PID=1792 containerID=06354b97274dc7d32c33c20a6e83449d7b8f987084588b32612722655feb531b description=default/busybox/busybox id=58f20477-0be2-44a5-9123-37cc37cffcea name=/runtime.v1.RuntimeService/StartContainer sandboxID=ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	06354b97274dc       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   ccb1864fbd807       busybox                                      default
	e8fdd569f7263       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   93cf903bcf2b3       coredns-66bc5c9577-9gghc                     kube-system
	d14bebf47d856       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   af714cb09969a       storage-provisioner                          kube-system
	eadb072e9c7c9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   9466a0640a3ef       kube-proxy-84tx6                             kube-system
	348ed9020203c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   57f3182cbab16       kindnet-gh5h2                                kube-system
	a87e95b521346       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   267063778b497       kube-controller-manager-embed-certs-715679   kube-system
	cfd269fcd70f4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   8024e36960596       kube-apiserver-embed-certs-715679            kube-system
	d3f603682a09a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   08df28fa560be       kube-scheduler-embed-certs-715679            kube-system
	603571c6a2405       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   16b0f44e30f1c       etcd-embed-certs-715679                      kube-system
	
	
	==> coredns [e8fdd569f726340e5b371a170eddc42baf46778a3a021c8f243de51ce9586353] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49512 - 17001 "HINFO IN 8128643999046918559.7593720342560858744. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023832789s
	
	
	==> describe nodes <==
	Name:               embed-certs-715679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-715679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-715679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_16_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:16:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-715679
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:17:17 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:17:17 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:17:17 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:17:17 +0000   Sun, 23 Nov 2025 11:17:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-715679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0f9e54f4-bafa-460f-a78e-697026168606
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-9gghc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-715679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-gh5h2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-715679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-715679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-84tx6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-715679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-715679 event: Registered Node embed-certs-715679 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-715679 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [603571c6a24055e96f103cef67876836f126ca627c9e6caf2a7ac1c587584619] <==
	{"level":"warn","ts":"2025-11-23T11:16:23.917534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:23.951008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:23.984706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.031522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.058209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.095363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.120819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.147621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.179776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.236432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.300826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.361562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.393495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.423711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.484432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.536048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.572663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.604640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.641065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.669249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.774124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.842716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.863156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:24.919081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:16:25.090644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:17:31 up  3:59,  0 user,  load average: 4.10, 3.64, 2.95
	Linux embed-certs-715679 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [348ed9020203c27ad442668def621602e7e6c864d16a9a0810bda4a2c238e47f] <==
	I1123 11:16:36.662691       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:16:36.663118       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:16:36.663301       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:16:36.663354       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:16:36.663394       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:16:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:16:36.859888       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:16:36.859955       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:16:36.859989       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:16:36.860737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:17:06.860506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:17:06.860511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:17:06.860638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 11:17:06.860794       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 11:17:08.060474       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:17:08.060508       1 metrics.go:72] Registering metrics
	I1123 11:17:08.060579       1 controller.go:711] "Syncing nftables rules"
	I1123 11:17:16.860441       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:17:16.860505       1 main.go:301] handling current node
	I1123 11:17:26.861465       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:17:26.861537       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cfd269fcd70f46f7f78b320364f39ab82ddf6e24b19e84759ea8d34c73d0ca57] <==
	I1123 11:16:27.011527       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:16:27.062458       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:27.066166       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 11:16:27.066476       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:16:27.108188       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:27.108274       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:16:27.122090       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 11:16:27.543366       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:16:27.553905       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:16:27.553997       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:16:28.422110       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:16:28.486162       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:16:28.620381       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:16:28.629379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 11:16:28.630679       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:16:28.641100       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:16:29.037263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:16:29.860084       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:16:29.884083       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:16:29.898476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 11:16:34.881976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:34.887428       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:16:34.976900       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:16:35.134667       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 11:17:28.839290       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:49846: use of closed network connection
	
	
	==> kube-controller-manager [a87e95b5213463ddf068b81b2cf97e00aeac9491174d6c5381dbdba4c87ed071] <==
	I1123 11:16:34.130726       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:16:34.130738       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:16:34.137901       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:16:34.141773       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:16:34.141932       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:16:34.150756       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 11:16:34.152169       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 11:16:34.152233       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 11:16:34.152262       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 11:16:34.152267       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 11:16:34.152278       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 11:16:34.153400       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 11:16:34.165277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:16:34.172884       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 11:16:34.173183       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:16:34.173241       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:16:34.173322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-715679"
	I1123 11:16:34.173361       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 11:16:34.174089       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 11:16:34.174137       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:16:34.174468       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 11:16:34.182736       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:16:34.198671       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:16:34.210162       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-715679" podCIDRs=["10.244.0.0/24"]
	I1123 11:17:19.179046       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [eadb072e9c7c95358aae02833ef7fa8c839ef386549486fe7cf719c6749c8f98] <==
	I1123 11:16:36.672107       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:16:36.771068       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:16:36.871905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:16:36.871951       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:16:36.872047       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:16:36.893105       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:16:36.893165       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:16:36.897932       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:16:36.898377       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:16:36.898399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:16:36.900122       1 config.go:200] "Starting service config controller"
	I1123 11:16:36.900187       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:16:36.900245       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:16:36.900274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:16:36.900312       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:16:36.900341       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:16:36.902747       1 config.go:309] "Starting node config controller"
	I1123 11:16:36.904727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:16:36.904804       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:16:37.001982       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:16:37.001996       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:16:37.002049       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d3f603682a09a654a64600023e77011bd7b7a54973e6c012a5120058de088fbb] <==
	I1123 11:16:27.027557       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1123 11:16:27.053620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:16:27.076150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:16:27.076248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:16:27.076349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:16:27.076412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:16:27.076471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:16:27.076528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:16:27.076576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:16:27.076664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:16:27.076720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:16:27.076797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:16:27.076820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:16:27.076876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:16:27.076930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:16:27.077006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 11:16:27.077069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:16:27.077113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:16:27.077232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:16:27.077298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:16:27.857795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:16:28.021681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:16:28.111422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:16:28.163595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1123 11:16:30.828453       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:16:34 embed-certs-715679 kubelet[1312]: I1123 11:16:34.245769    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 11:16:34 embed-certs-715679 kubelet[1312]: I1123 11:16:34.246896    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: E1123 11:16:35.218495    1312 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-715679\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-715679' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352703    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f553ae5d-e205-4c1e-8075-3a9746cb32da-xtables-lock\") pod \"kindnet-gh5h2\" (UID: \"f553ae5d-e205-4c1e-8075-3a9746cb32da\") " pod="kube-system/kindnet-gh5h2"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352761    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f553ae5d-e205-4c1e-8075-3a9746cb32da-lib-modules\") pod \"kindnet-gh5h2\" (UID: \"f553ae5d-e205-4c1e-8075-3a9746cb32da\") " pod="kube-system/kindnet-gh5h2"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352783    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h24qq\" (UniqueName: \"kubernetes.io/projected/f553ae5d-e205-4c1e-8075-3a9746cb32da-kube-api-access-h24qq\") pod \"kindnet-gh5h2\" (UID: \"f553ae5d-e205-4c1e-8075-3a9746cb32da\") " pod="kube-system/kindnet-gh5h2"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352802    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/904f9b00-4ea3-4184-b263-d052bb538d98-kube-proxy\") pod \"kube-proxy-84tx6\" (UID: \"904f9b00-4ea3-4184-b263-d052bb538d98\") " pod="kube-system/kube-proxy-84tx6"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352820    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bscws\" (UniqueName: \"kubernetes.io/projected/904f9b00-4ea3-4184-b263-d052bb538d98-kube-api-access-bscws\") pod \"kube-proxy-84tx6\" (UID: \"904f9b00-4ea3-4184-b263-d052bb538d98\") " pod="kube-system/kube-proxy-84tx6"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352842    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f553ae5d-e205-4c1e-8075-3a9746cb32da-cni-cfg\") pod \"kindnet-gh5h2\" (UID: \"f553ae5d-e205-4c1e-8075-3a9746cb32da\") " pod="kube-system/kindnet-gh5h2"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352859    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/904f9b00-4ea3-4184-b263-d052bb538d98-xtables-lock\") pod \"kube-proxy-84tx6\" (UID: \"904f9b00-4ea3-4184-b263-d052bb538d98\") " pod="kube-system/kube-proxy-84tx6"
	Nov 23 11:16:35 embed-certs-715679 kubelet[1312]: I1123 11:16:35.352877    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/904f9b00-4ea3-4184-b263-d052bb538d98-lib-modules\") pod \"kube-proxy-84tx6\" (UID: \"904f9b00-4ea3-4184-b263-d052bb538d98\") " pod="kube-system/kube-proxy-84tx6"
	Nov 23 11:16:36 embed-certs-715679 kubelet[1312]: I1123 11:16:36.334449    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:16:36 embed-certs-715679 kubelet[1312]: W1123 11:16:36.483204    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/crio-9466a0640a3efa9043a65c3d762d2cc4a5b3eab62dba454cc5f9ae2dac315ed7 WatchSource:0}: Error finding container 9466a0640a3efa9043a65c3d762d2cc4a5b3eab62dba454cc5f9ae2dac315ed7: Status 404 returned error can't find the container with id 9466a0640a3efa9043a65c3d762d2cc4a5b3eab62dba454cc5f9ae2dac315ed7
	Nov 23 11:16:37 embed-certs-715679 kubelet[1312]: I1123 11:16:37.402337    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-84tx6" podStartSLOduration=2.402307154 podStartE2EDuration="2.402307154s" podCreationTimestamp="2025-11-23 11:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:16:37.383293311 +0000 UTC m=+7.561181007" watchObservedRunningTime="2025-11-23 11:16:37.402307154 +0000 UTC m=+7.580194834"
	Nov 23 11:16:38 embed-certs-715679 kubelet[1312]: I1123 11:16:38.323932    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gh5h2" podStartSLOduration=3.323911851 podStartE2EDuration="3.323911851s" podCreationTimestamp="2025-11-23 11:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:16:37.403166339 +0000 UTC m=+7.581054020" watchObservedRunningTime="2025-11-23 11:16:38.323911851 +0000 UTC m=+8.501799523"
	Nov 23 11:17:17 embed-certs-715679 kubelet[1312]: I1123 11:17:17.104814    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 11:17:17 embed-certs-715679 kubelet[1312]: I1123 11:17:17.275849    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fef3a639-c516-41e3-a3d5-c7a49af7cd71-tmp\") pod \"storage-provisioner\" (UID: \"fef3a639-c516-41e3-a3d5-c7a49af7cd71\") " pod="kube-system/storage-provisioner"
	Nov 23 11:17:17 embed-certs-715679 kubelet[1312]: I1123 11:17:17.275901    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ds8f\" (UniqueName: \"kubernetes.io/projected/fef3a639-c516-41e3-a3d5-c7a49af7cd71-kube-api-access-2ds8f\") pod \"storage-provisioner\" (UID: \"fef3a639-c516-41e3-a3d5-c7a49af7cd71\") " pod="kube-system/storage-provisioner"
	Nov 23 11:17:17 embed-certs-715679 kubelet[1312]: I1123 11:17:17.275925    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99a3e5e-e56b-48b0-8413-324ec3f36f2b-config-volume\") pod \"coredns-66bc5c9577-9gghc\" (UID: \"d99a3e5e-e56b-48b0-8413-324ec3f36f2b\") " pod="kube-system/coredns-66bc5c9577-9gghc"
	Nov 23 11:17:17 embed-certs-715679 kubelet[1312]: I1123 11:17:17.275943    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsfn8\" (UniqueName: \"kubernetes.io/projected/d99a3e5e-e56b-48b0-8413-324ec3f36f2b-kube-api-access-gsfn8\") pod \"coredns-66bc5c9577-9gghc\" (UID: \"d99a3e5e-e56b-48b0-8413-324ec3f36f2b\") " pod="kube-system/coredns-66bc5c9577-9gghc"
	Nov 23 11:17:17 embed-certs-715679 kubelet[1312]: W1123 11:17:17.519214    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/crio-93cf903bcf2b3f97058e449f32a3ff8d105069a6d0f6494b594dae018e49fcb8 WatchSource:0}: Error finding container 93cf903bcf2b3f97058e449f32a3ff8d105069a6d0f6494b594dae018e49fcb8: Status 404 returned error can't find the container with id 93cf903bcf2b3f97058e449f32a3ff8d105069a6d0f6494b594dae018e49fcb8
	Nov 23 11:17:18 embed-certs-715679 kubelet[1312]: I1123 11:17:18.486033    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.486013591 podStartE2EDuration="42.486013591s" podCreationTimestamp="2025-11-23 11:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:17:18.471522843 +0000 UTC m=+48.649410540" watchObservedRunningTime="2025-11-23 11:17:18.486013591 +0000 UTC m=+48.663901271"
	Nov 23 11:17:20 embed-certs-715679 kubelet[1312]: I1123 11:17:20.661021    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9gghc" podStartSLOduration=45.661004221 podStartE2EDuration="45.661004221s" podCreationTimestamp="2025-11-23 11:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:17:18.486711546 +0000 UTC m=+48.664599218" watchObservedRunningTime="2025-11-23 11:17:20.661004221 +0000 UTC m=+50.838891901"
	Nov 23 11:17:20 embed-certs-715679 kubelet[1312]: I1123 11:17:20.807355    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqvg\" (UniqueName: \"kubernetes.io/projected/fbad8dcc-4eb1-420d-badc-d21b074bec9c-kube-api-access-4pqvg\") pod \"busybox\" (UID: \"fbad8dcc-4eb1-420d-badc-d21b074bec9c\") " pod="default/busybox"
	Nov 23 11:17:21 embed-certs-715679 kubelet[1312]: W1123 11:17:20.997732    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/crio-ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025 WatchSource:0}: Error finding container ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025: Status 404 returned error can't find the container with id ccb1864fbd80798fc2c4ce6e6ca2d1aa25f35883bbf20815a8ae68d8aa097025
	
	
	==> storage-provisioner [d14bebf47d85696856288057b8c5282d66f13b77e21adc57813eb12d998b5252] <==
	I1123 11:17:17.642566       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:17:17.663697       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:17:17.663868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:17:17.667396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:17.678137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:17:17.678385       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:17:17.678605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-715679_1263ccc1-ed1a-4ae4-ae4c-c2745f195450!
	W1123 11:17:17.681997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:17:17.683362       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69cca960-8539-4f65-91a5-a2434eb78e5c", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-715679_1263ccc1-ed1a-4ae4-ae4c-c2745f195450 became leader
	W1123 11:17:17.695099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:17:17.783364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-715679_1263ccc1-ed1a-4ae4-ae4c-c2745f195450!
	W1123 11:17:19.700605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:19.705560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:21.708554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:21.713793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:23.717631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:23.730334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:25.733149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:25.737841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:27.741257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:27.751338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:29.761016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:17:29.768546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-715679 -n embed-certs-715679
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-715679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.13s)
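
	For reference, the post-mortem checks above can be repeated by hand against the same profile. This is only an illustrative sketch, assuming the out/minikube-linux-arm64 binary and the embed-certs-715679 profile from this run are still available; the first two commands mirror the helpers_test.go steps above, and the final addons list call is an extra, hypothetical step for inspecting addon state:
	
	# mirror the harness's post-mortem checks (see helpers_test.go lines above)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-715679 -n embed-certs-715679
	kubectl --context embed-certs-715679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# hypothetical extra step: show which addons the profile currently reports as enabled
	out/minikube-linux-arm64 -p embed-certs-715679 addons list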

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-258179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-258179 --alsologtostderr -v=1: exit status 80 (2.017974475s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-258179 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:18:16.262887  733927 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:18:16.263037  733927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:16.263062  733927 out.go:374] Setting ErrFile to fd 2...
	I1123 11:18:16.263078  733927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:16.263349  733927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:18:16.263618  733927 out.go:368] Setting JSON to false
	I1123 11:18:16.263662  733927 mustload.go:66] Loading cluster: no-preload-258179
	I1123 11:18:16.264160  733927 config.go:182] Loaded profile config "no-preload-258179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:18:16.264735  733927 cli_runner.go:164] Run: docker container inspect no-preload-258179 --format={{.State.Status}}
	I1123 11:18:16.283328  733927 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:18:16.283649  733927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:18:16.343243  733927 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 11:18:16.334064361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:18:16.343915  733927 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-258179 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 11:18:16.347286  733927 out.go:179] * Pausing node no-preload-258179 ... 
	I1123 11:18:16.351033  733927 host.go:66] Checking if "no-preload-258179" exists ...
	I1123 11:18:16.351377  733927 ssh_runner.go:195] Run: systemctl --version
	I1123 11:18:16.351432  733927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-258179
	I1123 11:18:16.369530  733927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/no-preload-258179/id_rsa Username:docker}
	I1123 11:18:16.476050  733927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:16.490616  733927 pause.go:52] kubelet running: true
	I1123 11:18:16.490720  733927 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:18:16.780973  733927 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:18:16.781072  733927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:18:16.861119  733927 cri.go:89] found id: "ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51"
	I1123 11:18:16.861138  733927 cri.go:89] found id: "010081a1f01c079a5d890d4c85e73f35bc105a15dba95abd6f350b1410ed39b1"
	I1123 11:18:16.861143  733927 cri.go:89] found id: "0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3"
	I1123 11:18:16.861147  733927 cri.go:89] found id: "5cd66489cc097137f796eb57822e7eda6b82ced4f0f5cdf2307f5a0da7fa3c43"
	I1123 11:18:16.861150  733927 cri.go:89] found id: "3c3ac16e0584a895c95fcb3ba7bb50a286a349a7d4d808b588fdbfeae8af1f72"
	I1123 11:18:16.861154  733927 cri.go:89] found id: "762418eef7f5d57e699ef90acb86c4c9536c1542ec092c57afbb3936b8bccbf0"
	I1123 11:18:16.861157  733927 cri.go:89] found id: "61200a3335e64686b202c4b4402ab443dd01b7464a2ab00988d127cf932cb937"
	I1123 11:18:16.861160  733927 cri.go:89] found id: "da30f05ba9041e558527bda7b8ad6c0615aca7408e5d54c45850e08dc7dc706d"
	I1123 11:18:16.861163  733927 cri.go:89] found id: "329ee3cb780bc0ff84833eede69619e39622914b4a5243d5aacfed9e80e40108"
	I1123 11:18:16.861172  733927 cri.go:89] found id: "4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	I1123 11:18:16.861176  733927 cri.go:89] found id: "213bd7542ea16400bbe0ca1960cd9729174df0c04ae6695ab974de746318339b"
	I1123 11:18:16.861179  733927 cri.go:89] found id: ""
	I1123 11:18:16.861226  733927 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:18:16.872945  733927 retry.go:31] will retry after 230.578562ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:18:17.104496  733927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:17.117465  733927 pause.go:52] kubelet running: false
	I1123 11:18:17.117548  733927 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:18:17.301173  733927 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:18:17.301354  733927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:18:17.372366  733927 cri.go:89] found id: "ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51"
	I1123 11:18:17.372389  733927 cri.go:89] found id: "010081a1f01c079a5d890d4c85e73f35bc105a15dba95abd6f350b1410ed39b1"
	I1123 11:18:17.372394  733927 cri.go:89] found id: "0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3"
	I1123 11:18:17.372398  733927 cri.go:89] found id: "5cd66489cc097137f796eb57822e7eda6b82ced4f0f5cdf2307f5a0da7fa3c43"
	I1123 11:18:17.372401  733927 cri.go:89] found id: "3c3ac16e0584a895c95fcb3ba7bb50a286a349a7d4d808b588fdbfeae8af1f72"
	I1123 11:18:17.372405  733927 cri.go:89] found id: "762418eef7f5d57e699ef90acb86c4c9536c1542ec092c57afbb3936b8bccbf0"
	I1123 11:18:17.372408  733927 cri.go:89] found id: "61200a3335e64686b202c4b4402ab443dd01b7464a2ab00988d127cf932cb937"
	I1123 11:18:17.372411  733927 cri.go:89] found id: "da30f05ba9041e558527bda7b8ad6c0615aca7408e5d54c45850e08dc7dc706d"
	I1123 11:18:17.372414  733927 cri.go:89] found id: "329ee3cb780bc0ff84833eede69619e39622914b4a5243d5aacfed9e80e40108"
	I1123 11:18:17.372427  733927 cri.go:89] found id: "4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	I1123 11:18:17.372430  733927 cri.go:89] found id: "213bd7542ea16400bbe0ca1960cd9729174df0c04ae6695ab974de746318339b"
	I1123 11:18:17.372434  733927 cri.go:89] found id: ""
	I1123 11:18:17.372483  733927 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:18:17.384421  733927 retry.go:31] will retry after 538.986997ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:17Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:18:17.924107  733927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:17.937339  733927 pause.go:52] kubelet running: false
	I1123 11:18:17.937496  733927 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:18:18.119548  733927 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:18:18.119675  733927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:18:18.191004  733927 cri.go:89] found id: "ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51"
	I1123 11:18:18.191033  733927 cri.go:89] found id: "010081a1f01c079a5d890d4c85e73f35bc105a15dba95abd6f350b1410ed39b1"
	I1123 11:18:18.191038  733927 cri.go:89] found id: "0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3"
	I1123 11:18:18.191042  733927 cri.go:89] found id: "5cd66489cc097137f796eb57822e7eda6b82ced4f0f5cdf2307f5a0da7fa3c43"
	I1123 11:18:18.191045  733927 cri.go:89] found id: "3c3ac16e0584a895c95fcb3ba7bb50a286a349a7d4d808b588fdbfeae8af1f72"
	I1123 11:18:18.191048  733927 cri.go:89] found id: "762418eef7f5d57e699ef90acb86c4c9536c1542ec092c57afbb3936b8bccbf0"
	I1123 11:18:18.191052  733927 cri.go:89] found id: "61200a3335e64686b202c4b4402ab443dd01b7464a2ab00988d127cf932cb937"
	I1123 11:18:18.191055  733927 cri.go:89] found id: "da30f05ba9041e558527bda7b8ad6c0615aca7408e5d54c45850e08dc7dc706d"
	I1123 11:18:18.191057  733927 cri.go:89] found id: "329ee3cb780bc0ff84833eede69619e39622914b4a5243d5aacfed9e80e40108"
	I1123 11:18:18.191108  733927 cri.go:89] found id: "4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	I1123 11:18:18.191116  733927 cri.go:89] found id: "213bd7542ea16400bbe0ca1960cd9729174df0c04ae6695ab974de746318339b"
	I1123 11:18:18.191120  733927 cri.go:89] found id: ""
	I1123 11:18:18.191183  733927 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:18:18.206189  733927 out.go:203] 
	W1123 11:18:18.209201  733927 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 11:18:18.209268  733927 out.go:285] * 
	* 
	W1123 11:18:18.217253  733927 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 11:18:18.220125  733927 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-258179 --alsologtostderr -v=1 failed: exit status 80
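The exit status 80 comes from the pause path shown in the stderr above: minikube disables the kubelet, lists CRI containers with crictl, then runs "sudo runc list -f json", which keeps failing with "open /run/runc: no such file or directory" until the retries give up. A rough by-hand diagnostic for the same node, assuming the profile from this run (the /run/crun fallback path is an assumption about a crun-backed crio install, not something confirmed by the log):

	out/minikube-linux-arm64 -p no-preload-258179 ssh -- "sudo ls /run/runc /run/crun"
	out/minikube-linux-arm64 -p no-preload-258179 ssh -- "sudo crictl ps --quiet | head"
	out/minikube-linux-arm64 -p no-preload-258179 ssh -- "sudo runc list -f json"

If /run/runc is absent while containers are clearly running under crio, the pause failure is in how the low-level runtime state is listed rather than in the cluster itself.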
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-258179
helpers_test.go:243: (dbg) docker inspect no-preload-258179:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec",
	        "Created": "2025-11-23T11:15:32.709473146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:17:15.841605486Z",
	            "FinishedAt": "2025-11-23T11:17:14.949276966Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/hosts",
	        "LogPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec-json.log",
	        "Name": "/no-preload-258179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-258179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-258179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec",
	                "LowerDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-258179",
	                "Source": "/var/lib/docker/volumes/no-preload-258179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-258179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-258179",
	                "name.minikube.sigs.k8s.io": "no-preload-258179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26a1130f505a3228728abfd94f3009da581fc137e5d49d8cdf68b08f61dd42f6",
	            "SandboxKey": "/var/run/docker/netns/26a1130f505a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-258179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:73:86:ed:2d:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "21820889903cdda52be85d36791838a2563a18a74e774bdfd134f439e013fcbd",
	                    "EndpointID": "d050ce9df0757fbcb2ea3a1d0e65d7a6ebba4d36d56be7567e4f75bba618ca12",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-258179",
	                        "e9516afbc973"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179: exit status 2 (348.112721ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
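Note that the "Running" value above refers to the host container (consistent with the docker inspect output earlier), while the non-zero exit reflects the state of the cluster components that minikube status checks; the harness already flags this as possibly expected. A fuller per-component view, assuming the same profile (illustrative, not part of the recorded run):

	out/minikube-linux-arm64 status -p no-preload-258179 --output json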
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-258179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-258179 logs -n 25: (1.323738816s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578    │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578    │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	│ stop    │ -p old-k8s-version-378086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629387 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:17:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:17:44.676253  731689 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:17:44.676615  731689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:17:44.676663  731689 out.go:374] Setting ErrFile to fd 2...
	I1123 11:17:44.676684  731689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:17:44.677011  731689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:17:44.677464  731689 out.go:368] Setting JSON to false
	I1123 11:17:44.678480  731689 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14414,"bootTime":1763882251,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:17:44.678590  731689 start.go:143] virtualization:  
	I1123 11:17:44.681526  731689 out.go:179] * [embed-certs-715679] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:17:44.685565  731689 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:17:44.685777  731689 notify.go:221] Checking for updates...
	I1123 11:17:44.692089  731689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:17:44.695028  731689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:44.698031  731689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:17:44.700962  731689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:17:44.703901  731689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:17:44.707313  731689 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:44.708016  731689 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:17:44.735013  731689 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:17:44.735128  731689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:17:44.797944  731689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:17:44.788151809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:17:44.798058  731689 docker.go:319] overlay module found
	I1123 11:17:44.801206  731689 out.go:179] * Using the docker driver based on existing profile
	I1123 11:17:44.804154  731689 start.go:309] selected driver: docker
	I1123 11:17:44.804177  731689 start.go:927] validating driver "docker" against &{Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:44.804281  731689 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:17:44.805037  731689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:17:44.856300  731689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:17:44.846956837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:17:44.856645  731689 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:17:44.856680  731689 cni.go:84] Creating CNI manager for ""
	I1123 11:17:44.856741  731689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:17:44.856782  731689 start.go:353] cluster config:
	{Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:44.861656  731689 out.go:179] * Starting "embed-certs-715679" primary control-plane node in "embed-certs-715679" cluster
	I1123 11:17:44.864453  731689 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:17:44.867392  731689 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:17:44.870146  731689 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:17:44.870195  731689 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:17:44.870209  731689 cache.go:65] Caching tarball of preloaded images
	I1123 11:17:44.870213  731689 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:17:44.870295  731689 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:17:44.870307  731689 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:17:44.870424  731689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json ...
	I1123 11:17:44.890854  731689 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:17:44.890875  731689 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:17:44.890895  731689 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:17:44.890929  731689 start.go:360] acquireMachinesLock for embed-certs-715679: {Name:mkb7d2190da17f9715c804089887bdf6adc5f2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:44.890998  731689 start.go:364] duration metric: took 44.424µs to acquireMachinesLock for "embed-certs-715679"
	I1123 11:17:44.891025  731689 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:17:44.891036  731689 fix.go:54] fixHost starting: 
	I1123 11:17:44.891298  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:44.908904  731689 fix.go:112] recreateIfNeeded on embed-certs-715679: state=Stopped err=<nil>
	W1123 11:17:44.908935  731689 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:17:42.403648  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:44.903552  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:17:44.912128  731689 out.go:252] * Restarting existing docker container for "embed-certs-715679" ...
	I1123 11:17:44.912204  731689 cli_runner.go:164] Run: docker start embed-certs-715679
	I1123 11:17:45.316883  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:45.336336  731689 kic.go:430] container "embed-certs-715679" state is running.
	I1123 11:17:45.336716  731689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:17:45.361310  731689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json ...
	I1123 11:17:45.361610  731689 machine.go:94] provisionDockerMachine start ...
	I1123 11:17:45.361675  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:45.386296  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:45.386626  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:45.386635  731689 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:17:45.387234  731689 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42300->127.0.0.1:33817: read: connection reset by peer
	I1123 11:17:48.542186  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-715679
	
	I1123 11:17:48.542209  731689 ubuntu.go:182] provisioning hostname "embed-certs-715679"
	I1123 11:17:48.542271  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:48.563198  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:48.563544  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:48.563555  731689 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-715679 && echo "embed-certs-715679" | sudo tee /etc/hostname
	I1123 11:17:48.723199  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-715679
	
	I1123 11:17:48.723279  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:48.743209  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:48.743635  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:48.743656  731689 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-715679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-715679/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-715679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:17:48.897674  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:17:48.897702  731689 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:17:48.897725  731689 ubuntu.go:190] setting up certificates
	I1123 11:17:48.897734  731689 provision.go:84] configureAuth start
	I1123 11:17:48.897798  731689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:17:48.917446  731689 provision.go:143] copyHostCerts
	I1123 11:17:48.917517  731689 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:17:48.917537  731689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:17:48.917623  731689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:17:48.917722  731689 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:17:48.917735  731689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:17:48.917764  731689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:17:48.917832  731689 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:17:48.917840  731689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:17:48.917865  731689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:17:48.917917  731689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.embed-certs-715679 san=[127.0.0.1 192.168.76.2 embed-certs-715679 localhost minikube]
	I1123 11:17:49.029813  731689 provision.go:177] copyRemoteCerts
	I1123 11:17:49.029882  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:17:49.029936  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.051438  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:49.161240  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:17:49.179807  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:17:49.198492  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 11:17:49.217402  731689 provision.go:87] duration metric: took 319.641126ms to configureAuth
	I1123 11:17:49.217451  731689 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:17:49.217661  731689 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:49.217764  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.234657  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:49.234979  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:49.235001  731689 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:17:49.619059  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:17:49.619081  731689 machine.go:97] duration metric: took 4.257459601s to provisionDockerMachine
	I1123 11:17:49.619092  731689 start.go:293] postStartSetup for "embed-certs-715679" (driver="docker")
	I1123 11:17:49.619103  731689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:17:49.619170  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:17:49.619208  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.644182  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	W1123 11:17:46.903675  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:48.907658  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:17:49.753839  731689 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:17:49.757662  731689 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:17:49.757688  731689 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:17:49.757700  731689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:17:49.757758  731689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:17:49.757830  731689 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:17:49.757933  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:17:49.766953  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:17:49.784185  731689 start.go:296] duration metric: took 165.077067ms for postStartSetup
	I1123 11:17:49.784265  731689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:17:49.784307  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.801718  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:49.903727  731689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:17:49.908947  731689 fix.go:56] duration metric: took 5.017903947s for fixHost
	I1123 11:17:49.908975  731689 start.go:83] releasing machines lock for "embed-certs-715679", held for 5.017961327s
	I1123 11:17:49.909057  731689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:17:49.926615  731689 ssh_runner.go:195] Run: cat /version.json
	I1123 11:17:49.926671  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.926682  731689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:17:49.926737  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.946018  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:49.959258  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:50.149918  731689 ssh_runner.go:195] Run: systemctl --version
	I1123 11:17:50.156419  731689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:17:50.193765  731689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:17:50.198096  731689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:17:50.198167  731689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:17:50.205981  731689 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:17:50.206012  731689 start.go:496] detecting cgroup driver to use...
	I1123 11:17:50.206044  731689 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:17:50.206099  731689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:17:50.221626  731689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:17:50.234976  731689 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:17:50.235044  731689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:17:50.251070  731689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:17:50.264775  731689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:17:50.387726  731689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:17:50.520241  731689 docker.go:234] disabling docker service ...
	I1123 11:17:50.520351  731689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:17:50.535970  731689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:17:50.555868  731689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:17:50.683248  731689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:17:50.797111  731689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:17:50.811611  731689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:17:50.826671  731689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:17:50.826797  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.835393  731689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:17:50.835506  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.844375  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.852727  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.861873  731689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:17:50.870962  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.880462  731689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.888711  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.897123  731689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:17:50.906357  731689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:17:50.914203  731689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:51.042350  731689 ssh_runner.go:195] Run: sudo systemctl restart crio
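	The sed commands above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before this restart; a minimal sketch of confirming the result from inside the node (run via minikube -p embed-certs-715679 ssh), with the expected values assumed from the commands above rather than captured in this log:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",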
	I1123 11:17:51.239895  731689 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:17:51.240016  731689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:17:51.243818  731689 start.go:564] Will wait 60s for crictl version
	I1123 11:17:51.243927  731689 ssh_runner.go:195] Run: which crictl
	I1123 11:17:51.247915  731689 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:17:51.272975  731689 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:17:51.273151  731689 ssh_runner.go:195] Run: crio --version
	I1123 11:17:51.303359  731689 ssh_runner.go:195] Run: crio --version
	I1123 11:17:51.336128  731689 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:17:51.339108  731689 cli_runner.go:164] Run: docker network inspect embed-certs-715679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:17:51.355299  731689 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:17:51.359373  731689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:17:51.370028  731689 kubeadm.go:884] updating cluster {Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:17:51.370154  731689 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:17:51.370210  731689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:17:51.416613  731689 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:17:51.416638  731689 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:17:51.416695  731689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:17:51.442758  731689 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:17:51.442782  731689 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:17:51.442791  731689 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:17:51.442901  731689 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-715679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:17:51.442991  731689 ssh_runner.go:195] Run: crio config
	I1123 11:17:51.526806  731689 cni.go:84] Creating CNI manager for ""
	I1123 11:17:51.526829  731689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:17:51.526876  731689 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:17:51.526909  731689 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-715679 NodeName:embed-certs-715679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:17:51.527058  731689 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-715679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:17:51.527141  731689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:17:51.535084  731689 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:17:51.535175  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:17:51.546006  731689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 11:17:51.558621  731689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:17:51.571714  731689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 11:17:51.585931  731689 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:17:51.589335  731689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:17:51.599419  731689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:51.722152  731689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:17:51.738124  731689 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679 for IP: 192.168.76.2
	I1123 11:17:51.738149  731689 certs.go:195] generating shared ca certs ...
	I1123 11:17:51.738166  731689 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:51.738339  731689 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:17:51.738394  731689 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:17:51.738411  731689 certs.go:257] generating profile certs ...
	I1123 11:17:51.738519  731689 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.key
	I1123 11:17:51.738603  731689 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key.2c6e1eca
	I1123 11:17:51.738653  731689 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key
	I1123 11:17:51.738769  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:17:51.738803  731689 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:17:51.738820  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:17:51.738850  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:17:51.738879  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:17:51.738906  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:17:51.738962  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:17:51.739603  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:17:51.763895  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:17:51.781805  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:17:51.800228  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:17:51.826654  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 11:17:51.847062  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:17:51.867426  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:17:51.890438  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:17:51.916659  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:17:51.942591  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:17:51.968092  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:17:51.993737  731689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:17:52.009313  731689 ssh_runner.go:195] Run: openssl version
	I1123 11:17:52.022684  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:17:52.032197  731689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:52.036308  731689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:52.036468  731689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:52.082560  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:17:52.091897  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:17:52.101015  731689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:17:52.104723  731689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:17:52.104800  731689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:17:52.149006  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:17:52.157016  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:17:52.165336  731689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:17:52.169095  731689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:17:52.169162  731689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:17:52.210156  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
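	Each openssl x509 -hash call above prints the subject-name hash that OpenSSL uses for CA lookups under /etc/ssl/certs, and the ln -fs that follows creates the matching <hash>.0 symlink. A minimal sketch of checking one of these links by hand, reusing the hash already shown in this log (b5213941 for minikubeCA.pem):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # should resolve to minikubeCA.pem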
	I1123 11:17:52.221733  731689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:17:52.226151  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:17:52.267036  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:17:52.308162  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:17:52.349147  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:17:52.396451  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:17:52.448200  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
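	The -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now, so a soon-to-expire certificate fails the check and can be regenerated during this restart. A minimal sketch of the same check with a human-readable expiry date (path taken from the scp lines above; the actual notAfter value is not captured in this log):
	  sudo openssl x509 -noout -enddate -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt
	  # notAfter=<expiry date>
	  # Certificate will not expire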
	I1123 11:17:52.511405  731689 kubeadm.go:401] StartCluster: {Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:52.511512  731689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:17:52.511595  731689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:17:52.620422  731689 cri.go:89] found id: "20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199"
	I1123 11:17:52.620441  731689 cri.go:89] found id: "3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c"
	I1123 11:17:52.620446  731689 cri.go:89] found id: "c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132"
	I1123 11:17:52.620457  731689 cri.go:89] found id: "d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77"
	I1123 11:17:52.620460  731689 cri.go:89] found id: ""
	I1123 11:17:52.620539  731689 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:17:52.639021  731689 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:17:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:17:52.639134  731689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:17:52.649891  731689 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:17:52.649912  731689 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:17:52.649995  731689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:17:52.666077  731689 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:17:52.666684  731689 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-715679" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:52.666965  731689 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-715679" cluster setting kubeconfig missing "embed-certs-715679" context setting]
	I1123 11:17:52.667481  731689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:52.668862  731689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:17:52.680815  731689 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:17:52.680848  731689 kubeadm.go:602] duration metric: took 30.928821ms to restartPrimaryControlPlane
	I1123 11:17:52.680857  731689 kubeadm.go:403] duration metric: took 169.461086ms to StartCluster
	I1123 11:17:52.680890  731689 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:52.680975  731689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:52.682372  731689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:52.682654  731689 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:17:52.682996  731689 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:52.683081  731689 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:17:52.683180  731689 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-715679"
	I1123 11:17:52.683200  731689 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-715679"
	I1123 11:17:52.683203  731689 addons.go:70] Setting dashboard=true in profile "embed-certs-715679"
	I1123 11:17:52.683236  731689 addons.go:239] Setting addon dashboard=true in "embed-certs-715679"
	W1123 11:17:52.683244  731689 addons.go:248] addon dashboard should already be in state true
	W1123 11:17:52.683207  731689 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:17:52.683270  731689 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:17:52.683283  731689 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:17:52.683748  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.683781  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.683213  731689 addons.go:70] Setting default-storageclass=true in profile "embed-certs-715679"
	I1123 11:17:52.684298  731689 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-715679"
	I1123 11:17:52.684607  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.686750  731689 out.go:179] * Verifying Kubernetes components...
	I1123 11:17:52.694532  731689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:52.778241  731689 addons.go:239] Setting addon default-storageclass=true in "embed-certs-715679"
	W1123 11:17:52.778335  731689 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:17:52.778398  731689 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:17:52.779104  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.783323  731689 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:17:52.783681  731689 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:17:52.787545  731689 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:17:52.787627  731689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:17:52.787548  731689 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:17:52.787753  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:52.793868  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:17:52.793902  731689 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:17:52.793996  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:52.835499  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:52.838182  731689 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:17:52.838199  731689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:17:52.838254  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:52.858665  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:52.879156  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:53.084029  731689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:17:53.118917  731689 node_ready.go:35] waiting up to 6m0s for node "embed-certs-715679" to be "Ready" ...
	I1123 11:17:53.139885  731689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:17:53.165909  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:17:53.165985  731689 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:17:53.193337  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:17:53.193419  731689 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:17:53.206216  731689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:17:53.251398  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:17:53.251471  731689 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:17:53.374223  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:17:53.374243  731689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:17:53.415018  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:17:53.415038  731689 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:17:53.439025  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:17:53.439094  731689 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:17:53.466477  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:17:53.466553  731689 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:17:53.496159  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:17:53.496230  731689 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:17:53.518479  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:17:53.518552  731689 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:17:53.546476  731689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1123 11:17:51.404065  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:53.903140  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:17:58.185945  731689 node_ready.go:49] node "embed-certs-715679" is "Ready"
	I1123 11:17:58.185989  731689 node_ready.go:38] duration metric: took 5.06697696s for node "embed-certs-715679" to be "Ready" ...
	I1123 11:17:58.186003  731689 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:17:58.186076  731689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:17:59.935035  731689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.795066805s)
	I1123 11:17:59.935112  731689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.728826594s)
	I1123 11:17:59.935464  731689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.388906603s)
	I1123 11:17:59.936136  731689 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.750046078s)
	I1123 11:17:59.936158  731689 api_server.go:72] duration metric: took 7.253470418s to wait for apiserver process to appear ...
	I1123 11:17:59.936163  731689 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:17:59.936177  731689 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:17:59.939159  731689 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-715679 addons enable metrics-server
	
	I1123 11:17:59.945779  731689 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:17:59.945852  731689 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
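	The 500 above is expected briefly after a restart: every component reports ok except the rbac/bootstrap-roles post-start hook, and the retry about half a second later (below) returns 200. A minimal sketch of probing the same endpoint by hand, using standard curl/kubectl flags rather than anything taken from this log:
	  curl -sk https://192.168.76.2:8443/healthz?verbose
	  kubectl get --raw '/healthz?verbose'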
	I1123 11:17:59.964906  731689 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1123 11:17:55.905755  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:58.403569  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:18:00.404147  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:18:01.904039  728764 pod_ready.go:94] pod "coredns-66bc5c9577-6xhlc" is "Ready"
	I1123 11:18:01.904072  728764 pod_ready.go:86] duration metric: took 31.00658867s for pod "coredns-66bc5c9577-6xhlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.907166  728764 pod_ready.go:83] waiting for pod "etcd-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.915010  728764 pod_ready.go:94] pod "etcd-no-preload-258179" is "Ready"
	I1123 11:18:01.915036  728764 pod_ready.go:86] duration metric: took 7.841965ms for pod "etcd-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.917455  728764 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.922279  728764 pod_ready.go:94] pod "kube-apiserver-no-preload-258179" is "Ready"
	I1123 11:18:01.922314  728764 pod_ready.go:86] duration metric: took 4.799639ms for pod "kube-apiserver-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.926918  728764 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.103113  728764 pod_ready.go:94] pod "kube-controller-manager-no-preload-258179" is "Ready"
	I1123 11:18:02.103193  728764 pod_ready.go:86] duration metric: took 176.240841ms for pod "kube-controller-manager-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.301318  728764 pod_ready.go:83] waiting for pod "kube-proxy-twzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.701373  728764 pod_ready.go:94] pod "kube-proxy-twzmv" is "Ready"
	I1123 11:18:02.701398  728764 pod_ready.go:86] duration metric: took 400.053985ms for pod "kube-proxy-twzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.901725  728764 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:03.301034  728764 pod_ready.go:94] pod "kube-scheduler-no-preload-258179" is "Ready"
	I1123 11:18:03.301065  728764 pod_ready.go:86] duration metric: took 399.314029ms for pod "kube-scheduler-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:03.301079  728764 pod_ready.go:40] duration metric: took 32.410103682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:18:03.374647  728764 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:18:03.378382  728764 out.go:179] * Done! kubectl is now configured to use "no-preload-258179" cluster and "default" namespace by default
	I1123 11:17:59.967702  731689 addons.go:530] duration metric: took 7.284629951s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 11:18:00.436526  731689 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:18:00.454391  731689 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:18:00.456499  731689 api_server.go:141] control plane version: v1.34.1
	I1123 11:18:00.456544  731689 api_server.go:131] duration metric: took 520.371018ms to wait for apiserver health ...
	I1123 11:18:00.456556  731689 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:18:00.460406  731689 system_pods.go:59] 8 kube-system pods found
	I1123 11:18:00.460450  731689 system_pods.go:61] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:18:00.460460  731689 system_pods.go:61] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:18:00.460467  731689 system_pods.go:61] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:18:00.460474  731689 system_pods.go:61] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:18:00.460481  731689 system_pods.go:61] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:18:00.460489  731689 system_pods.go:61] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:18:00.460496  731689 system_pods.go:61] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:18:00.460502  731689 system_pods.go:61] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Running
	I1123 11:18:00.460509  731689 system_pods.go:74] duration metric: took 3.946555ms to wait for pod list to return data ...
	I1123 11:18:00.460522  731689 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:18:00.464123  731689 default_sa.go:45] found service account: "default"
	I1123 11:18:00.464152  731689 default_sa.go:55] duration metric: took 3.623207ms for default service account to be created ...
	I1123 11:18:00.464163  731689 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:18:00.467902  731689 system_pods.go:86] 8 kube-system pods found
	I1123 11:18:00.467939  731689 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:18:00.467948  731689 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:18:00.467956  731689 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:18:00.467963  731689 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:18:00.467971  731689 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:18:00.467980  731689 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:18:00.467987  731689 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:18:00.467994  731689 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Running
	I1123 11:18:00.468002  731689 system_pods.go:126] duration metric: took 3.833321ms to wait for k8s-apps to be running ...
	I1123 11:18:00.468016  731689 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:18:00.468071  731689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:00.495748  731689 system_svc.go:56] duration metric: took 27.722678ms WaitForService to wait for kubelet
	I1123 11:18:00.495778  731689 kubeadm.go:587] duration metric: took 7.813088675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:18:00.495797  731689 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:18:00.501099  731689 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:18:00.501133  731689 node_conditions.go:123] node cpu capacity is 2
	I1123 11:18:00.501147  731689 node_conditions.go:105] duration metric: took 5.344317ms to run NodePressure ...
	I1123 11:18:00.501160  731689 start.go:242] waiting for startup goroutines ...
	I1123 11:18:00.501168  731689 start.go:247] waiting for cluster config update ...
	I1123 11:18:00.501183  731689 start.go:256] writing updated cluster config ...
	I1123 11:18:00.501541  731689 ssh_runner.go:195] Run: rm -f paused
	I1123 11:18:00.505555  731689 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:18:00.510025  731689 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9gghc" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 11:18:02.515213  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:04.515427  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:06.516290  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:09.015370  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:11.017096  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:13.515180  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.144719796Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=21505124-f751-4b21-adf0-e3936b2b8095 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.146530009Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eb8967b-54d9-46b4-ba60-19103ab940f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.146839481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.240454929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.241083327Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f0b3d531f7501f3557e20a9476714db5b0d892088e5cb9ca53dd0c84aabe01be/merged/etc/passwd: no such file or directory"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.241287541Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f0b3d531f7501f3557e20a9476714db5b0d892088e5cb9ca53dd0c84aabe01be/merged/etc/group: no such file or directory"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.242382363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.288402879Z" level=info msg="Created container ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51: kube-system/storage-provisioner/storage-provisioner" id=2eb8967b-54d9-46b4-ba60-19103ab940f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.290739431Z" level=info msg="Starting container: ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51" id=a887ad87-2485-44d4-a3c1-d225151efb8d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.293176253Z" level=info msg="Started container" PID=1632 containerID=ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51 description=kube-system/storage-provisioner/storage-provisioner id=a887ad87-2485-44d4-a3c1-d225151efb8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=db15b227a765d6cca63e5bec7530e1fac18d3518ac4393f33488b8a9c6933ef3
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.008295839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.013731956Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.013992065Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.014084006Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.02109069Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.021272307Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.021360687Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.025835196Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.026029244Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.026111765Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.030369301Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.030544887Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.03063293Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.034729699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.034957397Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ff8dda5fc0ad6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   db15b227a765d       storage-provisioner                          kube-system
	4f32fdc60532f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   dfe3b22379375       dashboard-metrics-scraper-6ffb444bf9-7mbkl   kubernetes-dashboard
	213bd7542ea16       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   35d1b4b17cba6       kubernetes-dashboard-855c9754f9-dccnq        kubernetes-dashboard
	010081a1f01c0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   d1fcfca980c9b       kindnet-zbrwj                                kube-system
	0335a26d74d9d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           49 seconds ago      Exited              storage-provisioner         1                   db15b227a765d       storage-provisioner                          kube-system
	a8805d3e95ae8       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   c741f7294336f       busybox                                      default
	5cd66489cc097       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   d000594280c51       coredns-66bc5c9577-6xhlc                     kube-system
	3c3ac16e0584a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   f4b08861523db       kube-proxy-twzmv                             kube-system
	762418eef7f5d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           55 seconds ago      Running             kube-scheduler              1                   4d26059200d35       kube-scheduler-no-preload-258179             kube-system
	61200a3335e64       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           55 seconds ago      Running             etcd                        1                   a957c1d8b5cd9       etcd-no-preload-258179                       kube-system
	da30f05ba9041       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           55 seconds ago      Running             kube-controller-manager     1                   e4c219f916756       kube-controller-manager-no-preload-258179    kube-system
	329ee3cb780bc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           55 seconds ago      Running             kube-apiserver              1                   2bc6e9ae755d6       kube-apiserver-no-preload-258179             kube-system
	
	
	==> coredns [5cd66489cc097137f796eb57822e7eda6b82ced4f0f5cdf2307f5a0da7fa3c43] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40228 - 9081 "HINFO IN 2218210772408031849.802387473050991322. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.005848998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-258179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-258179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-258179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_16_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:16:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-258179
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:18:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-258179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                31cf968a-925d-4e78-a2a3-d0d59827b56c
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-6xhlc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-258179                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-zbrwj                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-258179              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-258179     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-twzmv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-258179              100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7mbkl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dccnq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           110s                 node-controller  Node no-preload-258179 event: Registered Node no-preload-258179 in Controller
	  Normal   NodeReady                93s                  kubelet          Node no-preload-258179 status is now: NodeReady
	  Normal   Starting                 57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                  node-controller  Node no-preload-258179 event: Registered Node no-preload-258179 in Controller
	
	
	==> dmesg <==
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [61200a3335e64686b202c4b4402ab443dd01b7464a2ab00988d127cf932cb937] <==
	{"level":"warn","ts":"2025-11-23T11:17:26.440017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.482075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.509009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.538957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.590304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.622157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.642646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.659995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.677673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.693609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.715484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.726089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.746075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.789635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.790411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.803226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.828219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.844970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.856966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.880540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.899022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.945936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.986991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:27.008828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:27.063695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39832","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:18:19 up  4:00,  0 user,  load average: 3.65, 3.59, 2.97
	Linux no-preload-258179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [010081a1f01c079a5d890d4c85e73f35bc105a15dba95abd6f350b1410ed39b1] <==
	I1123 11:17:29.796036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:17:29.796281       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:17:29.796422       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:17:29.796433       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:17:29.796443       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:17:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:17:30.006519       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:17:30.006568       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:17:30.006581       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:17:30.058415       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:18:00.010376       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:18:00.010784       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:18:00.062247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:18:00.062475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:18:01.606866       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:18:01.606901       1 metrics.go:72] Registering metrics
	I1123 11:18:01.606975       1 controller.go:711] "Syncing nftables rules"
	I1123 11:18:10.005219       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:18:10.007097       1 main.go:301] handling current node
	
	
	==> kube-apiserver [329ee3cb780bc0ff84833eede69619e39622914b4a5243d5aacfed9e80e40108] <==
	I1123 11:17:28.116837       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 11:17:28.121540       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 11:17:28.121570       1 policy_source.go:240] refreshing policies
	I1123 11:17:28.121652       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:17:28.121660       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:17:28.123575       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:17:28.124364       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:17:28.171672       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:17:28.179022       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:17:28.183285       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:17:28.218082       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:17:28.224314       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:17:28.224379       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:17:28.273626       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:17:28.829435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:17:28.971827       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:17:29.067497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:17:29.254890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:17:29.392186       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:17:29.433400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:17:29.787580       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.192.200"}
	I1123 11:17:29.862993       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.65.73"}
	I1123 11:17:32.174670       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:17:32.226098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:17:32.455164       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [da30f05ba9041e558527bda7b8ad6c0615aca7408e5d54c45850e08dc7dc706d] <==
	I1123 11:17:32.127418       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 11:17:32.149396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:17:32.152520       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 11:17:32.161526       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:17:32.161744       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:17:32.161767       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 11:17:32.161881       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:17:32.169533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:17:32.173482       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:17:32.177974       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 11:17:32.183831       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:17:32.184129       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:17:32.184576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:17:32.184224       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-258179"
	I1123 11:17:32.184686       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:17:32.187587       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:17:32.196558       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:17:32.196620       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:17:32.196778       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:17:32.196816       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:17:32.209728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:17:32.227408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:17:32.237556       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:17:32.237588       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:17:32.237597       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3c3ac16e0584a895c95fcb3ba7bb50a286a349a7d4d808b588fdbfeae8af1f72] <==
	I1123 11:17:30.261841       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:17:30.419291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:17:30.521504       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:17:30.521621       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:17:30.521730       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:17:30.677703       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:17:30.690214       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:17:30.732811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:17:30.733386       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:17:30.733517       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:17:30.741844       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:17:30.741920       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:17:30.742232       1 config.go:200] "Starting service config controller"
	I1123 11:17:30.742305       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:17:30.743212       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:17:30.743257       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:17:30.744141       1 config.go:309] "Starting node config controller"
	I1123 11:17:30.744191       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:17:30.744220       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:17:30.842231       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:17:30.842371       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:17:30.844188       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [762418eef7f5d57e699ef90acb86c4c9536c1542ec092c57afbb3936b8bccbf0] <==
	I1123 11:17:25.353164       1 serving.go:386] Generated self-signed cert in-memory
	W1123 11:17:28.063346       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 11:17:28.063398       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 11:17:28.063408       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 11:17:28.063417       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 11:17:28.190190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:17:28.190873       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:17:28.212863       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:17:28.220391       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:28.220425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:28.220445       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:17:28.322957       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: I1123 11:17:32.724491     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2hw\" (UniqueName: \"kubernetes.io/projected/8f0f01db-2f71-4e8f-9f0e-6672affa90af-kube-api-access-ml2hw\") pod \"dashboard-metrics-scraper-6ffb444bf9-7mbkl\" (UID: \"8f0f01db-2f71-4e8f-9f0e-6672affa90af\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl"
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: I1123 11:17:32.724557     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f0f01db-2f71-4e8f-9f0e-6672affa90af-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-7mbkl\" (UID: \"8f0f01db-2f71-4e8f-9f0e-6672affa90af\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl"
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: W1123 11:17:32.956583     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-dfe3b223793751cce66accbd879caef4b36ea78ac239db6b0bab79643efc6264 WatchSource:0}: Error finding container dfe3b223793751cce66accbd879caef4b36ea78ac239db6b0bab79643efc6264: Status 404 returned error can't find the container with id dfe3b223793751cce66accbd879caef4b36ea78ac239db6b0bab79643efc6264
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: W1123 11:17:32.965750     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-35d1b4b17cba6040c0e75d85b853c368d36d8f9a997c379a078ac58efa06170c WatchSource:0}: Error finding container 35d1b4b17cba6040c0e75d85b853c368d36d8f9a997c379a078ac58efa06170c: Status 404 returned error can't find the container with id 35d1b4b17cba6040c0e75d85b853c368d36d8f9a997c379a078ac58efa06170c
	Nov 23 11:17:37 no-preload-258179 kubelet[775]: I1123 11:17:37.069303     775 scope.go:117] "RemoveContainer" containerID="68c0e6d9458fb06aa741c140b77c3a56684862f885aafa2f4abd08d31a313a99"
	Nov 23 11:17:38 no-preload-258179 kubelet[775]: I1123 11:17:38.076322     775 scope.go:117] "RemoveContainer" containerID="68c0e6d9458fb06aa741c140b77c3a56684862f885aafa2f4abd08d31a313a99"
	Nov 23 11:17:38 no-preload-258179 kubelet[775]: I1123 11:17:38.076512     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:38 no-preload-258179 kubelet[775]: E1123 11:17:38.076696     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:17:39 no-preload-258179 kubelet[775]: I1123 11:17:39.077591     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:39 no-preload-258179 kubelet[775]: E1123 11:17:39.077746     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:17:42 no-preload-258179 kubelet[775]: I1123 11:17:42.311074     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dccnq" podStartSLOduration=2.001871442 podStartE2EDuration="10.311054538s" podCreationTimestamp="2025-11-23 11:17:32 +0000 UTC" firstStartedPulling="2025-11-23 11:17:32.968565208 +0000 UTC m=+10.344305035" lastFinishedPulling="2025-11-23 11:17:41.277748304 +0000 UTC m=+18.653488131" observedRunningTime="2025-11-23 11:17:42.112193223 +0000 UTC m=+19.487933066" watchObservedRunningTime="2025-11-23 11:17:42.311054538 +0000 UTC m=+19.686794365"
	Nov 23 11:17:42 no-preload-258179 kubelet[775]: I1123 11:17:42.934716     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:42 no-preload-258179 kubelet[775]: E1123 11:17:42.934917     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:17:54 no-preload-258179 kubelet[775]: I1123 11:17:54.871773     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:55 no-preload-258179 kubelet[775]: I1123 11:17:55.121379     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:55 no-preload-258179 kubelet[775]: I1123 11:17:55.121754     775 scope.go:117] "RemoveContainer" containerID="4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	Nov 23 11:17:55 no-preload-258179 kubelet[775]: E1123 11:17:55.121937     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:18:00 no-preload-258179 kubelet[775]: I1123 11:18:00.136693     775 scope.go:117] "RemoveContainer" containerID="0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3"
	Nov 23 11:18:02 no-preload-258179 kubelet[775]: I1123 11:18:02.934249     775 scope.go:117] "RemoveContainer" containerID="4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	Nov 23 11:18:02 no-preload-258179 kubelet[775]: E1123 11:18:02.934460     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:18:14 no-preload-258179 kubelet[775]: I1123 11:18:14.871438     775 scope.go:117] "RemoveContainer" containerID="4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	Nov 23 11:18:14 no-preload-258179 kubelet[775]: E1123 11:18:14.872076     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:18:16 no-preload-258179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:18:16 no-preload-258179 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:18:16 no-preload-258179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [213bd7542ea16400bbe0ca1960cd9729174df0c04ae6695ab974de746318339b] <==
	2025/11/23 11:17:41 Starting overwatch
	2025/11/23 11:17:41 Using namespace: kubernetes-dashboard
	2025/11/23 11:17:41 Using in-cluster config to connect to apiserver
	2025/11/23 11:17:41 Using secret token for csrf signing
	2025/11/23 11:17:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:17:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:17:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 11:17:41 Generating JWE encryption key
	2025/11/23 11:17:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:17:42 Initializing JWE encryption key from synchronized object
	2025/11/23 11:17:42 Creating in-cluster Sidecar client
	2025/11/23 11:17:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:17:42 Serving insecurely on HTTP port: 9090
	2025/11/23 11:18:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3] <==
	I1123 11:17:29.842883       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:17:59.888839       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51] <==
	I1123 11:18:00.324832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:18:00.343410       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:18:00.343469       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:18:00.350703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:03.806183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:08.067383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:11.667792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:14.721507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:17.743237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:17.748302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:17.748461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:18:17.748666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-258179_a3757c07-5283-46b8-999d-b7bc01327044!
	I1123 11:18:17.749567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ee794a4-039d-48f2-a5ae-7703aaab1a1e", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-258179_a3757c07-5283-46b8-999d-b7bc01327044 became leader
	W1123 11:18:17.754589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:17.761512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:17.848981       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-258179_a3757c07-5283-46b8-999d-b7bc01327044!
	W1123 11:18:19.764970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:19.770032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-258179 -n no-preload-258179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-258179 -n no-preload-258179: exit status 2 (376.832387ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-258179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-258179
helpers_test.go:243: (dbg) docker inspect no-preload-258179:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec",
	        "Created": "2025-11-23T11:15:32.709473146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:17:15.841605486Z",
	            "FinishedAt": "2025-11-23T11:17:14.949276966Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/hosts",
	        "LogPath": "/var/lib/docker/containers/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec-json.log",
	        "Name": "/no-preload-258179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-258179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-258179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec",
	                "LowerDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd21f5bc585ce60c0f3e766e8759dc1444d3f3650962de7df183d0c14cc35d9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-258179",
	                "Source": "/var/lib/docker/volumes/no-preload-258179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-258179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-258179",
	                "name.minikube.sigs.k8s.io": "no-preload-258179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26a1130f505a3228728abfd94f3009da581fc137e5d49d8cdf68b08f61dd42f6",
	            "SandboxKey": "/var/run/docker/netns/26a1130f505a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-258179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:73:86:ed:2d:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "21820889903cdda52be85d36791838a2563a18a74e774bdfd134f439e013fcbd",
	                    "EndpointID": "d050ce9df0757fbcb2ea3a1d0e65d7a6ebba4d36d56be7567e4f75bba618ca12",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-258179",
	                        "e9516afbc973"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179: exit status 2 (377.964336ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-258179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-258179 logs -n 25: (1.274948826s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-700578 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-700578    │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ delete  │ -p cert-options-700578                                                                                                                                                                                                                        │ cert-options-700578    │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:12 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:12 UTC │ 23 Nov 25 11:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-378086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:13 UTC │                     │
	│ stop    │ -p old-k8s-version-378086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:14 UTC │
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629387 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387 │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679     │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179      │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:17:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:17:44.676253  731689 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:17:44.676615  731689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:17:44.676663  731689 out.go:374] Setting ErrFile to fd 2...
	I1123 11:17:44.676684  731689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:17:44.677011  731689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:17:44.677464  731689 out.go:368] Setting JSON to false
	I1123 11:17:44.678480  731689 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14414,"bootTime":1763882251,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:17:44.678590  731689 start.go:143] virtualization:  
	I1123 11:17:44.681526  731689 out.go:179] * [embed-certs-715679] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:17:44.685565  731689 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:17:44.685777  731689 notify.go:221] Checking for updates...
	I1123 11:17:44.692089  731689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:17:44.695028  731689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:44.698031  731689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:17:44.700962  731689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:17:44.703901  731689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:17:44.707313  731689 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:44.708016  731689 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:17:44.735013  731689 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:17:44.735128  731689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:17:44.797944  731689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:17:44.788151809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:17:44.798058  731689 docker.go:319] overlay module found
	I1123 11:17:44.801206  731689 out.go:179] * Using the docker driver based on existing profile
	I1123 11:17:44.804154  731689 start.go:309] selected driver: docker
	I1123 11:17:44.804177  731689 start.go:927] validating driver "docker" against &{Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:44.804281  731689 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:17:44.805037  731689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:17:44.856300  731689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:17:44.846956837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:17:44.856645  731689 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:17:44.856680  731689 cni.go:84] Creating CNI manager for ""
	I1123 11:17:44.856741  731689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:17:44.856782  731689 start.go:353] cluster config:
	{Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:44.861656  731689 out.go:179] * Starting "embed-certs-715679" primary control-plane node in "embed-certs-715679" cluster
	I1123 11:17:44.864453  731689 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:17:44.867392  731689 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:17:44.870146  731689 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:17:44.870195  731689 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:17:44.870209  731689 cache.go:65] Caching tarball of preloaded images
	I1123 11:17:44.870213  731689 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:17:44.870295  731689 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:17:44.870307  731689 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:17:44.870424  731689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json ...
	I1123 11:17:44.890854  731689 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:17:44.890875  731689 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:17:44.890895  731689 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:17:44.890929  731689 start.go:360] acquireMachinesLock for embed-certs-715679: {Name:mkb7d2190da17f9715c804089887bdf6adc5f2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:17:44.890998  731689 start.go:364] duration metric: took 44.424µs to acquireMachinesLock for "embed-certs-715679"
	I1123 11:17:44.891025  731689 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:17:44.891036  731689 fix.go:54] fixHost starting: 
	I1123 11:17:44.891298  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:44.908904  731689 fix.go:112] recreateIfNeeded on embed-certs-715679: state=Stopped err=<nil>
	W1123 11:17:44.908935  731689 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:17:42.403648  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:44.903552  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:17:44.912128  731689 out.go:252] * Restarting existing docker container for "embed-certs-715679" ...
	I1123 11:17:44.912204  731689 cli_runner.go:164] Run: docker start embed-certs-715679
	I1123 11:17:45.316883  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:45.336336  731689 kic.go:430] container "embed-certs-715679" state is running.
	I1123 11:17:45.336716  731689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:17:45.361310  731689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/config.json ...
	I1123 11:17:45.361610  731689 machine.go:94] provisionDockerMachine start ...
	I1123 11:17:45.361675  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:45.386296  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:45.386626  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:45.386635  731689 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:17:45.387234  731689 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42300->127.0.0.1:33817: read: connection reset by peer
	I1123 11:17:48.542186  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-715679
	
	I1123 11:17:48.542209  731689 ubuntu.go:182] provisioning hostname "embed-certs-715679"
	I1123 11:17:48.542271  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:48.563198  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:48.563544  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:48.563555  731689 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-715679 && echo "embed-certs-715679" | sudo tee /etc/hostname
	I1123 11:17:48.723199  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-715679
	
	I1123 11:17:48.723279  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:48.743209  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:48.743635  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:48.743656  731689 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-715679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-715679/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-715679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:17:48.897674  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:17:48.897702  731689 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:17:48.897725  731689 ubuntu.go:190] setting up certificates
	I1123 11:17:48.897734  731689 provision.go:84] configureAuth start
	I1123 11:17:48.897798  731689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:17:48.917446  731689 provision.go:143] copyHostCerts
	I1123 11:17:48.917517  731689 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:17:48.917537  731689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:17:48.917623  731689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:17:48.917722  731689 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:17:48.917735  731689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:17:48.917764  731689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:17:48.917832  731689 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:17:48.917840  731689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:17:48.917865  731689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:17:48.917917  731689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.embed-certs-715679 san=[127.0.0.1 192.168.76.2 embed-certs-715679 localhost minikube]
	I1123 11:17:49.029813  731689 provision.go:177] copyRemoteCerts
	I1123 11:17:49.029882  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:17:49.029936  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.051438  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:49.161240  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:17:49.179807  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:17:49.198492  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 11:17:49.217402  731689 provision.go:87] duration metric: took 319.641126ms to configureAuth
	I1123 11:17:49.217451  731689 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:17:49.217661  731689 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:49.217764  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.234657  731689 main.go:143] libmachine: Using SSH client type: native
	I1123 11:17:49.234979  731689 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1123 11:17:49.235001  731689 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:17:49.619059  731689 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:17:49.619081  731689 machine.go:97] duration metric: took 4.257459601s to provisionDockerMachine
	I1123 11:17:49.619092  731689 start.go:293] postStartSetup for "embed-certs-715679" (driver="docker")
	I1123 11:17:49.619103  731689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:17:49.619170  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:17:49.619208  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.644182  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	W1123 11:17:46.903675  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:48.907658  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:17:49.753839  731689 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:17:49.757662  731689 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:17:49.757688  731689 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:17:49.757700  731689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:17:49.757758  731689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:17:49.757830  731689 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:17:49.757933  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:17:49.766953  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:17:49.784185  731689 start.go:296] duration metric: took 165.077067ms for postStartSetup
	I1123 11:17:49.784265  731689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:17:49.784307  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.801718  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:49.903727  731689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:17:49.908947  731689 fix.go:56] duration metric: took 5.017903947s for fixHost
	I1123 11:17:49.908975  731689 start.go:83] releasing machines lock for "embed-certs-715679", held for 5.017961327s
	I1123 11:17:49.909057  731689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-715679
	I1123 11:17:49.926615  731689 ssh_runner.go:195] Run: cat /version.json
	I1123 11:17:49.926671  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.926682  731689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:17:49.926737  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:49.946018  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:49.959258  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:50.149918  731689 ssh_runner.go:195] Run: systemctl --version
	I1123 11:17:50.156419  731689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:17:50.193765  731689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:17:50.198096  731689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:17:50.198167  731689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:17:50.205981  731689 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:17:50.206012  731689 start.go:496] detecting cgroup driver to use...
	I1123 11:17:50.206044  731689 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:17:50.206099  731689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:17:50.221626  731689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:17:50.234976  731689 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:17:50.235044  731689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:17:50.251070  731689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:17:50.264775  731689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:17:50.387726  731689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:17:50.520241  731689 docker.go:234] disabling docker service ...
	I1123 11:17:50.520351  731689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:17:50.535970  731689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:17:50.555868  731689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:17:50.683248  731689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:17:50.797111  731689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:17:50.811611  731689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:17:50.826671  731689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:17:50.826797  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.835393  731689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:17:50.835506  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.844375  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.852727  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.861873  731689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:17:50.870962  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.880462  731689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.888711  731689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:17:50.897123  731689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:17:50.906357  731689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:17:50.914203  731689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:51.042350  731689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 11:17:51.239895  731689 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:17:51.240016  731689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:17:51.243818  731689 start.go:564] Will wait 60s for crictl version
	I1123 11:17:51.243927  731689 ssh_runner.go:195] Run: which crictl
	I1123 11:17:51.247915  731689 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:17:51.272975  731689 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:17:51.273151  731689 ssh_runner.go:195] Run: crio --version
	I1123 11:17:51.303359  731689 ssh_runner.go:195] Run: crio --version
	I1123 11:17:51.336128  731689 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:17:51.339108  731689 cli_runner.go:164] Run: docker network inspect embed-certs-715679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:17:51.355299  731689 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:17:51.359373  731689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:17:51.370028  731689 kubeadm.go:884] updating cluster {Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:17:51.370154  731689 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:17:51.370210  731689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:17:51.416613  731689 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:17:51.416638  731689 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:17:51.416695  731689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:17:51.442758  731689 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:17:51.442782  731689 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:17:51.442791  731689 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:17:51.442901  731689 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-715679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:17:51.442991  731689 ssh_runner.go:195] Run: crio config
	I1123 11:17:51.526806  731689 cni.go:84] Creating CNI manager for ""
	I1123 11:17:51.526829  731689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:17:51.526876  731689 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:17:51.526909  731689 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-715679 NodeName:embed-certs-715679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:17:51.527058  731689 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-715679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:17:51.527141  731689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:17:51.535084  731689 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:17:51.535175  731689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:17:51.546006  731689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 11:17:51.558621  731689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:17:51.571714  731689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 11:17:51.585931  731689 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:17:51.589335  731689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:17:51.599419  731689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:51.722152  731689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:17:51.738124  731689 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679 for IP: 192.168.76.2
	I1123 11:17:51.738149  731689 certs.go:195] generating shared ca certs ...
	I1123 11:17:51.738166  731689 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:51.738339  731689 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:17:51.738394  731689 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:17:51.738411  731689 certs.go:257] generating profile certs ...
	I1123 11:17:51.738519  731689 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/client.key
	I1123 11:17:51.738603  731689 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key.2c6e1eca
	I1123 11:17:51.738653  731689 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key
	I1123 11:17:51.738769  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:17:51.738803  731689 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:17:51.738820  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:17:51.738850  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:17:51.738879  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:17:51.738906  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:17:51.738962  731689 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:17:51.739603  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:17:51.763895  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:17:51.781805  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:17:51.800228  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:17:51.826654  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 11:17:51.847062  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:17:51.867426  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:17:51.890438  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/embed-certs-715679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:17:51.916659  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:17:51.942591  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:17:51.968092  731689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:17:51.993737  731689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:17:52.009313  731689 ssh_runner.go:195] Run: openssl version
	I1123 11:17:52.022684  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:17:52.032197  731689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:52.036308  731689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:52.036468  731689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:17:52.082560  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:17:52.091897  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:17:52.101015  731689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:17:52.104723  731689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:17:52.104800  731689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:17:52.149006  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:17:52.157016  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:17:52.165336  731689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:17:52.169095  731689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:17:52.169162  731689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:17:52.210156  731689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:17:52.221733  731689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:17:52.226151  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:17:52.267036  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:17:52.308162  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:17:52.349147  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:17:52.396451  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:17:52.448200  731689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 11:17:52.511405  731689 kubeadm.go:401] StartCluster: {Name:embed-certs-715679 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-715679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:17:52.511512  731689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:17:52.511595  731689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:17:52.620422  731689 cri.go:89] found id: "20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199"
	I1123 11:17:52.620441  731689 cri.go:89] found id: "3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c"
	I1123 11:17:52.620446  731689 cri.go:89] found id: "c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132"
	I1123 11:17:52.620457  731689 cri.go:89] found id: "d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77"
	I1123 11:17:52.620460  731689 cri.go:89] found id: ""
	I1123 11:17:52.620539  731689 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:17:52.639021  731689 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:17:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:17:52.639134  731689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:17:52.649891  731689 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:17:52.649912  731689 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:17:52.649995  731689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:17:52.666077  731689 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:17:52.666684  731689 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-715679" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:52.666965  731689 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-715679" cluster setting kubeconfig missing "embed-certs-715679" context setting]
	I1123 11:17:52.667481  731689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:52.668862  731689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:17:52.680815  731689 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:17:52.680848  731689 kubeadm.go:602] duration metric: took 30.928821ms to restartPrimaryControlPlane
	I1123 11:17:52.680857  731689 kubeadm.go:403] duration metric: took 169.461086ms to StartCluster
	I1123 11:17:52.680890  731689 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:52.680975  731689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:17:52.682372  731689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:17:52.682654  731689 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:17:52.682996  731689 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:17:52.683081  731689 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:17:52.683180  731689 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-715679"
	I1123 11:17:52.683200  731689 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-715679"
	I1123 11:17:52.683203  731689 addons.go:70] Setting dashboard=true in profile "embed-certs-715679"
	I1123 11:17:52.683236  731689 addons.go:239] Setting addon dashboard=true in "embed-certs-715679"
	W1123 11:17:52.683244  731689 addons.go:248] addon dashboard should already be in state true
	W1123 11:17:52.683207  731689 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:17:52.683270  731689 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:17:52.683283  731689 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:17:52.683748  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.683781  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.683213  731689 addons.go:70] Setting default-storageclass=true in profile "embed-certs-715679"
	I1123 11:17:52.684298  731689 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-715679"
	I1123 11:17:52.684607  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.686750  731689 out.go:179] * Verifying Kubernetes components...
	I1123 11:17:52.694532  731689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:17:52.778241  731689 addons.go:239] Setting addon default-storageclass=true in "embed-certs-715679"
	W1123 11:17:52.778335  731689 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:17:52.778398  731689 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:17:52.779104  731689 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:17:52.783323  731689 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:17:52.783681  731689 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:17:52.787545  731689 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:17:52.787627  731689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:17:52.787548  731689 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:17:52.787753  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:52.793868  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:17:52.793902  731689 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:17:52.793996  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:52.835499  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:52.838182  731689 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:17:52.838199  731689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:17:52.838254  731689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:17:52.858665  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:52.879156  731689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:17:53.084029  731689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:17:53.118917  731689 node_ready.go:35] waiting up to 6m0s for node "embed-certs-715679" to be "Ready" ...
	I1123 11:17:53.139885  731689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:17:53.165909  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:17:53.165985  731689 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:17:53.193337  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:17:53.193419  731689 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:17:53.206216  731689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:17:53.251398  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:17:53.251471  731689 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:17:53.374223  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:17:53.374243  731689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:17:53.415018  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:17:53.415038  731689 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:17:53.439025  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:17:53.439094  731689 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:17:53.466477  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:17:53.466553  731689 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:17:53.496159  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:17:53.496230  731689 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:17:53.518479  731689 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:17:53.518552  731689 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:17:53.546476  731689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1123 11:17:51.404065  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:53.903140  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:17:58.185945  731689 node_ready.go:49] node "embed-certs-715679" is "Ready"
	I1123 11:17:58.185989  731689 node_ready.go:38] duration metric: took 5.06697696s for node "embed-certs-715679" to be "Ready" ...
	I1123 11:17:58.186003  731689 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:17:58.186076  731689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:17:59.935035  731689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.795066805s)
	I1123 11:17:59.935112  731689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.728826594s)
	I1123 11:17:59.935464  731689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.388906603s)
	I1123 11:17:59.936136  731689 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.750046078s)
	I1123 11:17:59.936158  731689 api_server.go:72] duration metric: took 7.253470418s to wait for apiserver process to appear ...
	I1123 11:17:59.936163  731689 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:17:59.936177  731689 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:17:59.939159  731689 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-715679 addons enable metrics-server
	
	I1123 11:17:59.945779  731689 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:17:59.945852  731689 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:17:59.964906  731689 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1123 11:17:55.905755  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:17:58.403569  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	W1123 11:18:00.404147  728764 pod_ready.go:104] pod "coredns-66bc5c9577-6xhlc" is not "Ready", error: <nil>
	I1123 11:18:01.904039  728764 pod_ready.go:94] pod "coredns-66bc5c9577-6xhlc" is "Ready"
	I1123 11:18:01.904072  728764 pod_ready.go:86] duration metric: took 31.00658867s for pod "coredns-66bc5c9577-6xhlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.907166  728764 pod_ready.go:83] waiting for pod "etcd-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.915010  728764 pod_ready.go:94] pod "etcd-no-preload-258179" is "Ready"
	I1123 11:18:01.915036  728764 pod_ready.go:86] duration metric: took 7.841965ms for pod "etcd-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.917455  728764 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.922279  728764 pod_ready.go:94] pod "kube-apiserver-no-preload-258179" is "Ready"
	I1123 11:18:01.922314  728764 pod_ready.go:86] duration metric: took 4.799639ms for pod "kube-apiserver-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:01.926918  728764 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.103113  728764 pod_ready.go:94] pod "kube-controller-manager-no-preload-258179" is "Ready"
	I1123 11:18:02.103193  728764 pod_ready.go:86] duration metric: took 176.240841ms for pod "kube-controller-manager-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.301318  728764 pod_ready.go:83] waiting for pod "kube-proxy-twzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.701373  728764 pod_ready.go:94] pod "kube-proxy-twzmv" is "Ready"
	I1123 11:18:02.701398  728764 pod_ready.go:86] duration metric: took 400.053985ms for pod "kube-proxy-twzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:02.901725  728764 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:03.301034  728764 pod_ready.go:94] pod "kube-scheduler-no-preload-258179" is "Ready"
	I1123 11:18:03.301065  728764 pod_ready.go:86] duration metric: took 399.314029ms for pod "kube-scheduler-no-preload-258179" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:03.301079  728764 pod_ready.go:40] duration metric: took 32.410103682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:18:03.374647  728764 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:18:03.378382  728764 out.go:179] * Done! kubectl is now configured to use "no-preload-258179" cluster and "default" namespace by default
	I1123 11:17:59.967702  731689 addons.go:530] duration metric: took 7.284629951s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 11:18:00.436526  731689 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:18:00.454391  731689 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:18:00.456499  731689 api_server.go:141] control plane version: v1.34.1
	I1123 11:18:00.456544  731689 api_server.go:131] duration metric: took 520.371018ms to wait for apiserver health ...
	I1123 11:18:00.456556  731689 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:18:00.460406  731689 system_pods.go:59] 8 kube-system pods found
	I1123 11:18:00.460450  731689 system_pods.go:61] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:18:00.460460  731689 system_pods.go:61] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:18:00.460467  731689 system_pods.go:61] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:18:00.460474  731689 system_pods.go:61] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:18:00.460481  731689 system_pods.go:61] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:18:00.460489  731689 system_pods.go:61] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:18:00.460496  731689 system_pods.go:61] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:18:00.460502  731689 system_pods.go:61] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Running
	I1123 11:18:00.460509  731689 system_pods.go:74] duration metric: took 3.946555ms to wait for pod list to return data ...
	I1123 11:18:00.460522  731689 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:18:00.464123  731689 default_sa.go:45] found service account: "default"
	I1123 11:18:00.464152  731689 default_sa.go:55] duration metric: took 3.623207ms for default service account to be created ...
	I1123 11:18:00.464163  731689 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:18:00.467902  731689 system_pods.go:86] 8 kube-system pods found
	I1123 11:18:00.467939  731689 system_pods.go:89] "coredns-66bc5c9577-9gghc" [d99a3e5e-e56b-48b0-8413-324ec3f36f2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:18:00.467948  731689 system_pods.go:89] "etcd-embed-certs-715679" [5fc21e7a-a77b-492b-8810-45e676bbfda6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:18:00.467956  731689 system_pods.go:89] "kindnet-gh5h2" [f553ae5d-e205-4c1e-8075-3a9746cb32da] Running
	I1123 11:18:00.467963  731689 system_pods.go:89] "kube-apiserver-embed-certs-715679" [5ddac975-5998-43f9-8c96-4d5a0bf25d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:18:00.467971  731689 system_pods.go:89] "kube-controller-manager-embed-certs-715679" [e1e67f73-c2ea-4159-ae82-a3c5878a0486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:18:00.467980  731689 system_pods.go:89] "kube-proxy-84tx6" [904f9b00-4ea3-4184-b263-d052bb538d98] Running
	I1123 11:18:00.467987  731689 system_pods.go:89] "kube-scheduler-embed-certs-715679" [eec56d4d-ad40-4915-9e74-60015f9ec455] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:18:00.467994  731689 system_pods.go:89] "storage-provisioner" [fef3a639-c516-41e3-a3d5-c7a49af7cd71] Running
	I1123 11:18:00.468002  731689 system_pods.go:126] duration metric: took 3.833321ms to wait for k8s-apps to be running ...
	I1123 11:18:00.468016  731689 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:18:00.468071  731689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:00.495748  731689 system_svc.go:56] duration metric: took 27.722678ms WaitForService to wait for kubelet
	I1123 11:18:00.495778  731689 kubeadm.go:587] duration metric: took 7.813088675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:18:00.495797  731689 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:18:00.501099  731689 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:18:00.501133  731689 node_conditions.go:123] node cpu capacity is 2
	I1123 11:18:00.501147  731689 node_conditions.go:105] duration metric: took 5.344317ms to run NodePressure ...
	I1123 11:18:00.501160  731689 start.go:242] waiting for startup goroutines ...
	I1123 11:18:00.501168  731689 start.go:247] waiting for cluster config update ...
	I1123 11:18:00.501183  731689 start.go:256] writing updated cluster config ...
	I1123 11:18:00.501541  731689 ssh_runner.go:195] Run: rm -f paused
	I1123 11:18:00.505555  731689 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:18:00.510025  731689 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9gghc" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 11:18:02.515213  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:04.515427  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:06.516290  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:09.015370  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:11.017096  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:13.515180  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:15.515619  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:18.016951  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.144719796Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=21505124-f751-4b21-adf0-e3936b2b8095 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.146530009Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eb8967b-54d9-46b4-ba60-19103ab940f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.146839481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.240454929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.241083327Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f0b3d531f7501f3557e20a9476714db5b0d892088e5cb9ca53dd0c84aabe01be/merged/etc/passwd: no such file or directory"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.241287541Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f0b3d531f7501f3557e20a9476714db5b0d892088e5cb9ca53dd0c84aabe01be/merged/etc/group: no such file or directory"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.242382363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.288402879Z" level=info msg="Created container ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51: kube-system/storage-provisioner/storage-provisioner" id=2eb8967b-54d9-46b4-ba60-19103ab940f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.290739431Z" level=info msg="Starting container: ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51" id=a887ad87-2485-44d4-a3c1-d225151efb8d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:18:00 no-preload-258179 crio[654]: time="2025-11-23T11:18:00.293176253Z" level=info msg="Started container" PID=1632 containerID=ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51 description=kube-system/storage-provisioner/storage-provisioner id=a887ad87-2485-44d4-a3c1-d225151efb8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=db15b227a765d6cca63e5bec7530e1fac18d3518ac4393f33488b8a9c6933ef3
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.008295839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.013731956Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.013992065Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.014084006Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.02109069Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.021272307Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.021360687Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.025835196Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.026029244Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.026111765Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.030369301Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.030544887Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.03063293Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.034729699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:10 no-preload-258179 crio[654]: time="2025-11-23T11:18:10.034957397Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ff8dda5fc0ad6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   db15b227a765d       storage-provisioner                          kube-system
	4f32fdc60532f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   dfe3b22379375       dashboard-metrics-scraper-6ffb444bf9-7mbkl   kubernetes-dashboard
	213bd7542ea16       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   35d1b4b17cba6       kubernetes-dashboard-855c9754f9-dccnq        kubernetes-dashboard
	010081a1f01c0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   d1fcfca980c9b       kindnet-zbrwj                                kube-system
	0335a26d74d9d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   db15b227a765d       storage-provisioner                          kube-system
	a8805d3e95ae8       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   c741f7294336f       busybox                                      default
	5cd66489cc097       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   d000594280c51       coredns-66bc5c9577-6xhlc                     kube-system
	3c3ac16e0584a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   f4b08861523db       kube-proxy-twzmv                             kube-system
	762418eef7f5d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   4d26059200d35       kube-scheduler-no-preload-258179             kube-system
	61200a3335e64       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   a957c1d8b5cd9       etcd-no-preload-258179                       kube-system
	da30f05ba9041       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   e4c219f916756       kube-controller-manager-no-preload-258179    kube-system
	329ee3cb780bc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   2bc6e9ae755d6       kube-apiserver-no-preload-258179             kube-system
	
	
	==> coredns [5cd66489cc097137f796eb57822e7eda6b82ced4f0f5cdf2307f5a0da7fa3c43] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40228 - 9081 "HINFO IN 2218210772408031849.802387473050991322. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.005848998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-258179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-258179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-258179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_16_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:16:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-258179
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:18:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:17:59 +0000   Sun, 23 Nov 2025 11:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-258179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                31cf968a-925d-4e78-a2a3-d0d59827b56c
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-6xhlc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-258179                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-zbrwj                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-258179              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-258179     200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-twzmv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-258179              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7mbkl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dccnq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           112s                 node-controller  Node no-preload-258179 event: Registered Node no-preload-258179 in Controller
	  Normal   NodeReady                95s                  kubelet          Node no-preload-258179 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-258179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-258179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-258179 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node no-preload-258179 event: Registered Node no-preload-258179 in Controller
	
	
	==> dmesg <==
	[Nov23 10:57] overlayfs: idmapped layers are currently not supported
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [61200a3335e64686b202c4b4402ab443dd01b7464a2ab00988d127cf932cb937] <==
	{"level":"warn","ts":"2025-11-23T11:17:26.440017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.482075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.509009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.538957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.590304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.622157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.642646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.659995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.677673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.693609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.715484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.726089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.746075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.789635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.790411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.803226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.828219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.844970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.856966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.880540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.899022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.945936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:26.986991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:27.008828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:27.063695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39832","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:18:21 up  4:00,  0 user,  load average: 3.65, 3.59, 2.97
	Linux no-preload-258179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [010081a1f01c079a5d890d4c85e73f35bc105a15dba95abd6f350b1410ed39b1] <==
	I1123 11:17:29.796036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:17:29.796281       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:17:29.796422       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:17:29.796433       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:17:29.796443       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:17:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:17:30.006519       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:17:30.006568       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:17:30.006581       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:17:30.058415       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:18:00.010376       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:18:00.010784       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:18:00.062247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:18:00.062475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:18:01.606866       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:18:01.606901       1 metrics.go:72] Registering metrics
	I1123 11:18:01.606975       1 controller.go:711] "Syncing nftables rules"
	I1123 11:18:10.005219       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:18:10.007097       1 main.go:301] handling current node
	I1123 11:18:20.007613       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:18:20.007663       1 main.go:301] handling current node
	
	
	==> kube-apiserver [329ee3cb780bc0ff84833eede69619e39622914b4a5243d5aacfed9e80e40108] <==
	I1123 11:17:28.116837       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 11:17:28.121540       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 11:17:28.121570       1 policy_source.go:240] refreshing policies
	I1123 11:17:28.121652       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:17:28.121660       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:17:28.123575       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:17:28.124364       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:17:28.171672       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:17:28.179022       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:17:28.183285       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:17:28.218082       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:17:28.224314       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:17:28.224379       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:17:28.273626       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:17:28.829435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:17:28.971827       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:17:29.067497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:17:29.254890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:17:29.392186       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:17:29.433400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:17:29.787580       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.192.200"}
	I1123 11:17:29.862993       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.65.73"}
	I1123 11:17:32.174670       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:17:32.226098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:17:32.455164       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [da30f05ba9041e558527bda7b8ad6c0615aca7408e5d54c45850e08dc7dc706d] <==
	I1123 11:17:32.127418       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 11:17:32.149396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:17:32.152520       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 11:17:32.161526       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:17:32.161744       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:17:32.161767       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 11:17:32.161881       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:17:32.169533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:17:32.173482       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:17:32.177974       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 11:17:32.183831       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:17:32.184129       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:17:32.184576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:17:32.184224       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-258179"
	I1123 11:17:32.184686       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:17:32.187587       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:17:32.196558       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:17:32.196620       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:17:32.196778       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:17:32.196816       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:17:32.209728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:17:32.227408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:17:32.237556       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:17:32.237588       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:17:32.237597       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3c3ac16e0584a895c95fcb3ba7bb50a286a349a7d4d808b588fdbfeae8af1f72] <==
	I1123 11:17:30.261841       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:17:30.419291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:17:30.521504       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:17:30.521621       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:17:30.521730       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:17:30.677703       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:17:30.690214       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:17:30.732811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:17:30.733386       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:17:30.733517       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:17:30.741844       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:17:30.741920       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:17:30.742232       1 config.go:200] "Starting service config controller"
	I1123 11:17:30.742305       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:17:30.743212       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:17:30.743257       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:17:30.744141       1 config.go:309] "Starting node config controller"
	I1123 11:17:30.744191       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:17:30.744220       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:17:30.842231       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:17:30.842371       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:17:30.844188       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [762418eef7f5d57e699ef90acb86c4c9536c1542ec092c57afbb3936b8bccbf0] <==
	I1123 11:17:25.353164       1 serving.go:386] Generated self-signed cert in-memory
	W1123 11:17:28.063346       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 11:17:28.063398       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 11:17:28.063408       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 11:17:28.063417       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 11:17:28.190190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:17:28.190873       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:17:28.212863       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:17:28.220391       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:28.220425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:28.220445       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:17:28.322957       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: I1123 11:17:32.724491     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2hw\" (UniqueName: \"kubernetes.io/projected/8f0f01db-2f71-4e8f-9f0e-6672affa90af-kube-api-access-ml2hw\") pod \"dashboard-metrics-scraper-6ffb444bf9-7mbkl\" (UID: \"8f0f01db-2f71-4e8f-9f0e-6672affa90af\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl"
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: I1123 11:17:32.724557     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f0f01db-2f71-4e8f-9f0e-6672affa90af-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-7mbkl\" (UID: \"8f0f01db-2f71-4e8f-9f0e-6672affa90af\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl"
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: W1123 11:17:32.956583     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-dfe3b223793751cce66accbd879caef4b36ea78ac239db6b0bab79643efc6264 WatchSource:0}: Error finding container dfe3b223793751cce66accbd879caef4b36ea78ac239db6b0bab79643efc6264: Status 404 returned error can't find the container with id dfe3b223793751cce66accbd879caef4b36ea78ac239db6b0bab79643efc6264
	Nov 23 11:17:32 no-preload-258179 kubelet[775]: W1123 11:17:32.965750     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e9516afbc9736e0046c84e45e2eb9cb652c5febbf93acfec76e0e86a1dd139ec/crio-35d1b4b17cba6040c0e75d85b853c368d36d8f9a997c379a078ac58efa06170c WatchSource:0}: Error finding container 35d1b4b17cba6040c0e75d85b853c368d36d8f9a997c379a078ac58efa06170c: Status 404 returned error can't find the container with id 35d1b4b17cba6040c0e75d85b853c368d36d8f9a997c379a078ac58efa06170c
	Nov 23 11:17:37 no-preload-258179 kubelet[775]: I1123 11:17:37.069303     775 scope.go:117] "RemoveContainer" containerID="68c0e6d9458fb06aa741c140b77c3a56684862f885aafa2f4abd08d31a313a99"
	Nov 23 11:17:38 no-preload-258179 kubelet[775]: I1123 11:17:38.076322     775 scope.go:117] "RemoveContainer" containerID="68c0e6d9458fb06aa741c140b77c3a56684862f885aafa2f4abd08d31a313a99"
	Nov 23 11:17:38 no-preload-258179 kubelet[775]: I1123 11:17:38.076512     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:38 no-preload-258179 kubelet[775]: E1123 11:17:38.076696     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:17:39 no-preload-258179 kubelet[775]: I1123 11:17:39.077591     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:39 no-preload-258179 kubelet[775]: E1123 11:17:39.077746     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:17:42 no-preload-258179 kubelet[775]: I1123 11:17:42.311074     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dccnq" podStartSLOduration=2.001871442 podStartE2EDuration="10.311054538s" podCreationTimestamp="2025-11-23 11:17:32 +0000 UTC" firstStartedPulling="2025-11-23 11:17:32.968565208 +0000 UTC m=+10.344305035" lastFinishedPulling="2025-11-23 11:17:41.277748304 +0000 UTC m=+18.653488131" observedRunningTime="2025-11-23 11:17:42.112193223 +0000 UTC m=+19.487933066" watchObservedRunningTime="2025-11-23 11:17:42.311054538 +0000 UTC m=+19.686794365"
	Nov 23 11:17:42 no-preload-258179 kubelet[775]: I1123 11:17:42.934716     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:42 no-preload-258179 kubelet[775]: E1123 11:17:42.934917     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:17:54 no-preload-258179 kubelet[775]: I1123 11:17:54.871773     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:55 no-preload-258179 kubelet[775]: I1123 11:17:55.121379     775 scope.go:117] "RemoveContainer" containerID="5aec16a3b785388a76458e420a735ec32c041c94d3572935b87f0fde168611b2"
	Nov 23 11:17:55 no-preload-258179 kubelet[775]: I1123 11:17:55.121754     775 scope.go:117] "RemoveContainer" containerID="4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	Nov 23 11:17:55 no-preload-258179 kubelet[775]: E1123 11:17:55.121937     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:18:00 no-preload-258179 kubelet[775]: I1123 11:18:00.136693     775 scope.go:117] "RemoveContainer" containerID="0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3"
	Nov 23 11:18:02 no-preload-258179 kubelet[775]: I1123 11:18:02.934249     775 scope.go:117] "RemoveContainer" containerID="4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	Nov 23 11:18:02 no-preload-258179 kubelet[775]: E1123 11:18:02.934460     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:18:14 no-preload-258179 kubelet[775]: I1123 11:18:14.871438     775 scope.go:117] "RemoveContainer" containerID="4f32fdc60532fa22ce70adb83f1bc3f9a498d2f859f0f3661b209a4eb7f7b4f5"
	Nov 23 11:18:14 no-preload-258179 kubelet[775]: E1123 11:18:14.872076     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7mbkl_kubernetes-dashboard(8f0f01db-2f71-4e8f-9f0e-6672affa90af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7mbkl" podUID="8f0f01db-2f71-4e8f-9f0e-6672affa90af"
	Nov 23 11:18:16 no-preload-258179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:18:16 no-preload-258179 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:18:16 no-preload-258179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [213bd7542ea16400bbe0ca1960cd9729174df0c04ae6695ab974de746318339b] <==
	2025/11/23 11:17:41 Using namespace: kubernetes-dashboard
	2025/11/23 11:17:41 Using in-cluster config to connect to apiserver
	2025/11/23 11:17:41 Using secret token for csrf signing
	2025/11/23 11:17:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:17:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:17:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 11:17:41 Generating JWE encryption key
	2025/11/23 11:17:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:17:42 Initializing JWE encryption key from synchronized object
	2025/11/23 11:17:42 Creating in-cluster Sidecar client
	2025/11/23 11:17:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:17:42 Serving insecurely on HTTP port: 9090
	2025/11/23 11:18:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:17:41 Starting overwatch
	
	
	==> storage-provisioner [0335a26d74d9d24bfc0e1369259c9a742f2b779885f8ce02463fd36d44df7ee3] <==
	I1123 11:17:29.842883       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:17:59.888839       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff8dda5fc0ad62f8d567c86c4fadc33462d5c24e65284650dd95f184b42a2c51] <==
	I1123 11:18:00.324832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:18:00.343410       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:18:00.343469       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:18:00.350703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:03.806183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:08.067383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:11.667792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:14.721507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:17.743237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:17.748302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:17.748461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:18:17.748666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-258179_a3757c07-5283-46b8-999d-b7bc01327044!
	I1123 11:18:17.749567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ee794a4-039d-48f2-a5ae-7703aaab1a1e", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-258179_a3757c07-5283-46b8-999d-b7bc01327044 became leader
	W1123 11:18:17.754589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:17.761512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:17.848981       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-258179_a3757c07-5283-46b8-999d-b7bc01327044!
	W1123 11:18:19.764970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:19.770032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:21.774873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:21.780258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-258179 -n no-preload-258179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-258179 -n no-preload-258179: exit status 2 (381.764399ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-258179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-715679 --alsologtostderr -v=1
E1123 11:18:52.478031  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-715679 --alsologtostderr -v=1: exit status 80 (2.274396369s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-715679 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:18:52.062932  737488 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:18:52.063071  737488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:52.063077  737488 out.go:374] Setting ErrFile to fd 2...
	I1123 11:18:52.063082  737488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:52.063369  737488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:18:52.063679  737488 out.go:368] Setting JSON to false
	I1123 11:18:52.063695  737488 mustload.go:66] Loading cluster: embed-certs-715679
	I1123 11:18:52.064114  737488 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:18:52.064580  737488 cli_runner.go:164] Run: docker container inspect embed-certs-715679 --format={{.State.Status}}
	I1123 11:18:52.093892  737488 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:18:52.094232  737488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:18:52.205289  737488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-23 11:18:52.193951875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:18:52.205989  737488 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-715679 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 11:18:52.210572  737488 out.go:179] * Pausing node embed-certs-715679 ... 
	I1123 11:18:52.214316  737488 host.go:66] Checking if "embed-certs-715679" exists ...
	I1123 11:18:52.214634  737488 ssh_runner.go:195] Run: systemctl --version
	I1123 11:18:52.214674  737488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-715679
	I1123 11:18:52.242078  737488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/embed-certs-715679/id_rsa Username:docker}
	I1123 11:18:52.368417  737488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:52.393612  737488 pause.go:52] kubelet running: true
	I1123 11:18:52.393685  737488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:18:52.772912  737488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:18:52.772995  737488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:18:52.919353  737488 cri.go:89] found id: "0635b3b4249e89f567cbfcf4fca7e7c36f6918fc08b8db8d3517ee5cc414b46a"
	I1123 11:18:52.919411  737488 cri.go:89] found id: "6d43e5477c8354b480be323d501bde9ccdf2ce5fb0a610110f36cc963145e4b4"
	I1123 11:18:52.919433  737488 cri.go:89] found id: "fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b"
	I1123 11:18:52.919454  737488 cri.go:89] found id: "2e9c10cadc1c93a0579863766c9dd59aaf1ebf2733e6a3127e1e121114213768"
	I1123 11:18:52.919475  737488 cri.go:89] found id: "75d7b06e8aa7dcd731688456f75103f5b70f9d0a304f7bc68eb282728b5c6cd5"
	I1123 11:18:52.919495  737488 cri.go:89] found id: "20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199"
	I1123 11:18:52.919514  737488 cri.go:89] found id: "3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c"
	I1123 11:18:52.919535  737488 cri.go:89] found id: "c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132"
	I1123 11:18:52.919554  737488 cri.go:89] found id: "d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77"
	I1123 11:18:52.919576  737488 cri.go:89] found id: "c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986"
	I1123 11:18:52.919595  737488 cri.go:89] found id: "2cf450fb7ea4ad6a81a7878a6098c4aab3262b246b1d1326e5ce26be1e08beba"
	I1123 11:18:52.919615  737488 cri.go:89] found id: ""
	I1123 11:18:52.919690  737488 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:18:52.940187  737488 retry.go:31] will retry after 191.177263ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:18:53.131574  737488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:53.153902  737488 pause.go:52] kubelet running: false
	I1123 11:18:53.154017  737488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:18:53.449237  737488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:18:53.449393  737488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:18:53.577587  737488 cri.go:89] found id: "0635b3b4249e89f567cbfcf4fca7e7c36f6918fc08b8db8d3517ee5cc414b46a"
	I1123 11:18:53.577618  737488 cri.go:89] found id: "6d43e5477c8354b480be323d501bde9ccdf2ce5fb0a610110f36cc963145e4b4"
	I1123 11:18:53.577623  737488 cri.go:89] found id: "fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b"
	I1123 11:18:53.577627  737488 cri.go:89] found id: "2e9c10cadc1c93a0579863766c9dd59aaf1ebf2733e6a3127e1e121114213768"
	I1123 11:18:53.577631  737488 cri.go:89] found id: "75d7b06e8aa7dcd731688456f75103f5b70f9d0a304f7bc68eb282728b5c6cd5"
	I1123 11:18:53.577635  737488 cri.go:89] found id: "20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199"
	I1123 11:18:53.577638  737488 cri.go:89] found id: "3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c"
	I1123 11:18:53.577641  737488 cri.go:89] found id: "c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132"
	I1123 11:18:53.577644  737488 cri.go:89] found id: "d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77"
	I1123 11:18:53.577651  737488 cri.go:89] found id: "c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986"
	I1123 11:18:53.577654  737488 cri.go:89] found id: "2cf450fb7ea4ad6a81a7878a6098c4aab3262b246b1d1326e5ce26be1e08beba"
	I1123 11:18:53.577657  737488 cri.go:89] found id: ""
	I1123 11:18:53.577725  737488 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:18:53.590979  737488 retry.go:31] will retry after 188.764227ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:53Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:18:53.780411  737488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:18:53.801112  737488 pause.go:52] kubelet running: false
	I1123 11:18:53.801238  737488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:18:54.051118  737488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:18:54.051193  737488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:18:54.184356  737488 cri.go:89] found id: "0635b3b4249e89f567cbfcf4fca7e7c36f6918fc08b8db8d3517ee5cc414b46a"
	I1123 11:18:54.184383  737488 cri.go:89] found id: "6d43e5477c8354b480be323d501bde9ccdf2ce5fb0a610110f36cc963145e4b4"
	I1123 11:18:54.184388  737488 cri.go:89] found id: "fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b"
	I1123 11:18:54.184392  737488 cri.go:89] found id: "2e9c10cadc1c93a0579863766c9dd59aaf1ebf2733e6a3127e1e121114213768"
	I1123 11:18:54.184399  737488 cri.go:89] found id: "75d7b06e8aa7dcd731688456f75103f5b70f9d0a304f7bc68eb282728b5c6cd5"
	I1123 11:18:54.184428  737488 cri.go:89] found id: "20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199"
	I1123 11:18:54.184438  737488 cri.go:89] found id: "3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c"
	I1123 11:18:54.184446  737488 cri.go:89] found id: "c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132"
	I1123 11:18:54.184450  737488 cri.go:89] found id: "d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77"
	I1123 11:18:54.184460  737488 cri.go:89] found id: "c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986"
	I1123 11:18:54.184468  737488 cri.go:89] found id: "2cf450fb7ea4ad6a81a7878a6098c4aab3262b246b1d1326e5ce26be1e08beba"
	I1123 11:18:54.184471  737488 cri.go:89] found id: ""
	I1123 11:18:54.184548  737488 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:18:54.206695  737488 out.go:203] 
	W1123 11:18:54.209685  737488 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:18:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 11:18:54.209708  737488 out.go:285] * 
	* 
	W1123 11:18:54.218459  737488 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 11:18:54.221542  737488 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-715679 --alsologtostderr -v=1 failed: exit status 80
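Note: every pause attempt above fails at the same step: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", and after the short retries logged by retry.go the command gives up with GUEST_PAUSE / exit status 80. The containers are clearly visible to crictl (see the found-id lines), just not to a bare runc query against its default /run/runc root. A minimal, local-only sketch of that list-with-backoff probe (hypothetical: it runs runc directly instead of through minikube's ssh_runner, and the retry count and backoff are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRuncContainers mimics the "sudo runc list -f json" probe in the log,
	// retrying with a short pause when the call fails (e.g. /run/runc missing).
	func listRuncContainers(attempts int, backoff time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("runc list: %w: %s", err, out)
			time.Sleep(backoff) // retry.go logs randomized ~190ms waits at this point
		}
		return nil, lastErr
	}

	func main() {
		out, err := listRuncContainers(3, 200*time.Millisecond)
		if err != nil {
			fmt.Println("giving up:", err) // this is where pause surfaces GUEST_PAUSE
			return
		}
		fmt.Printf("%s\n", out)
	}

On this node the state directory is simply absent, so retrying cannot help; the failure repeats identically at 11:18:52, :53 and :54 before the error is reported.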
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-715679
helpers_test.go:243: (dbg) docker inspect embed-certs-715679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944",
	        "Created": "2025-11-23T11:15:57.805460889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731825,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:17:44.944833739Z",
	            "FinishedAt": "2025-11-23T11:17:43.917904612Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/hosts",
	        "LogPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944-json.log",
	        "Name": "/embed-certs-715679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-715679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-715679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944",
	                "LowerDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-715679",
	                "Source": "/var/lib/docker/volumes/embed-certs-715679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-715679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-715679",
	                "name.minikube.sigs.k8s.io": "embed-certs-715679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "123a25548df68610e38fab9bc466be27653489cae74348f0815b597a21ebf459",
	            "SandboxKey": "/var/run/docker/netns/123a25548df6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-715679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:bc:58:a6:86:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9dc6254b6af11e97f0c613269fd92518cae572b3a5313c8e4edd68d21062116b",
	                    "EndpointID": "369cb5d862fe089efebfffcdfec0a43b6f83f043c1a8922f3cc442fe856c38d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-715679",
	                        "bf3b5a2f915e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
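Note: the inspect dump above is where the SSH endpoint used earlier comes from: NetworkSettings.Ports["22/tcp"][0].HostPort is 33817, which matches the 127.0.0.1:33817 client that sshutil opened at 11:18:52. A small sketch of the same lookup (hypothetical helper; it shells out to docker with the same Go template the cli_runner call used):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort resolves the published host port for 22/tcp of a container,
	// using the same Go template the cli_runner call in the pause log used.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("embed-certs-715679")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 33817 in the dump above
	}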
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679: exit status 2 (483.522482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
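Note: `status --format={{.Host}}` prints "Running" yet exits 2; minikube encodes component state in the status exit code, so the post-mortem helper treats a non-zero exit as informational as long as stdout is still readable ("may be ok"). A hedged sketch of that tolerant check (hypothetical; the real helpers_test.go logic differs):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus runs "minikube status --format={{.Host}}" and, like the
	// post-mortem helper, keeps the stdout value even when the exit code is non-zero.
	func hostStatus(profile string) (string, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		status := strings.TrimSpace(string(out))
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) && status != "" {
				// e.g. exit status 2 with "Running" on stdout: record it and move on,
				// mirroring "status error: exit status 2 (may be ok)" in the log.
				return status, nil
			}
			return "", err
		}
		return status, nil
	}

	func main() {
		s, err := hostStatus("embed-certs-715679")
		fmt.Println(s, err)
	}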
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-715679 logs -n 25
E1123 11:18:55.040110  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-715679 logs -n 25: (1.611734949s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629387       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:18:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:18:25.829290  735340 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:18:25.829443  735340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:25.829455  735340 out.go:374] Setting ErrFile to fd 2...
	I1123 11:18:25.829460  735340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:25.829730  735340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:18:25.830160  735340 out.go:368] Setting JSON to false
	I1123 11:18:25.831121  735340 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14455,"bootTime":1763882251,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:18:25.831190  735340 start.go:143] virtualization:  
	I1123 11:18:25.835135  735340 out.go:179] * [default-k8s-diff-port-103096] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:18:25.838339  735340 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:18:25.838460  735340 notify.go:221] Checking for updates...
	I1123 11:18:25.844631  735340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:18:25.847780  735340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:18:25.850884  735340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:18:25.853933  735340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:18:25.856812  735340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:18:25.860212  735340 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:18:25.860318  735340 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:18:25.889528  735340 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:18:25.889709  735340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:18:25.966794  735340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:18:25.956474063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:18:25.966895  735340 docker.go:319] overlay module found
	I1123 11:18:25.970203  735340 out.go:179] * Using the docker driver based on user configuration
	I1123 11:18:25.973122  735340 start.go:309] selected driver: docker
	I1123 11:18:25.973139  735340 start.go:927] validating driver "docker" against <nil>
	I1123 11:18:25.973154  735340 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:18:25.973910  735340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:18:26.049770  735340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:18:26.040558323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:18:26.049972  735340 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 11:18:26.050242  735340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:18:26.053152  735340 out.go:179] * Using Docker driver with root privileges
	I1123 11:18:26.056071  735340 cni.go:84] Creating CNI manager for ""
	I1123 11:18:26.056146  735340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:18:26.056159  735340 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:18:26.056246  735340 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:18:26.059697  735340 out.go:179] * Starting "default-k8s-diff-port-103096" primary control-plane node in "default-k8s-diff-port-103096" cluster
	I1123 11:18:26.062701  735340 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:18:26.065755  735340 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:18:26.068714  735340 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:18:26.068777  735340 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:18:26.068788  735340 cache.go:65] Caching tarball of preloaded images
	I1123 11:18:26.068819  735340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:18:26.068896  735340 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:18:26.068908  735340 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:18:26.069017  735340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:18:26.069035  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json: {Name:mk28ac05f5a9433f32913884c1bcfb8cd8c6ec08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:26.090177  735340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:18:26.090202  735340 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:18:26.090223  735340 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:18:26.090259  735340 start.go:360] acquireMachinesLock for default-k8s-diff-port-103096: {Name:mk974e47f06d6cbaa10109a8c2801bcc82e17d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:18:26.090374  735340 start.go:364] duration metric: took 94.189µs to acquireMachinesLock for "default-k8s-diff-port-103096"
	I1123 11:18:26.090406  735340 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:18:26.090473  735340 start.go:125] createHost starting for "" (driver="docker")
	W1123 11:18:25.021891  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:27.516233  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:29.516700  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	I1123 11:18:26.093892  735340 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:18:26.094154  735340 start.go:159] libmachine.API.Create for "default-k8s-diff-port-103096" (driver="docker")
	I1123 11:18:26.094195  735340 client.go:173] LocalClient.Create starting
	I1123 11:18:26.094281  735340 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:18:26.094352  735340 main.go:143] libmachine: Decoding PEM data...
	I1123 11:18:26.094386  735340 main.go:143] libmachine: Parsing certificate...
	I1123 11:18:26.094469  735340 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:18:26.094543  735340 main.go:143] libmachine: Decoding PEM data...
	I1123 11:18:26.094835  735340 main.go:143] libmachine: Parsing certificate...
	I1123 11:18:26.095377  735340 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:18:26.112809  735340 cli_runner.go:211] docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:18:26.112933  735340 network_create.go:284] running [docker network inspect default-k8s-diff-port-103096] to gather additional debugging logs...
	I1123 11:18:26.112983  735340 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096
	W1123 11:18:26.130811  735340 cli_runner.go:211] docker network inspect default-k8s-diff-port-103096 returned with exit code 1
	I1123 11:18:26.130841  735340 network_create.go:287] error running [docker network inspect default-k8s-diff-port-103096]: docker network inspect default-k8s-diff-port-103096: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-103096 not found
	I1123 11:18:26.130854  735340 network_create.go:289] output of [docker network inspect default-k8s-diff-port-103096]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-103096 not found
	
	** /stderr **
	I1123 11:18:26.130965  735340 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:18:26.148307  735340 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:18:26.148652  735340 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:18:26.149035  735340 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:18:26.149316  735340 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9dc6254b6af1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:0e:72:d4:64:a7} reservation:<nil>}
	I1123 11:18:26.149919  735340 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2a620}
	I1123 11:18:26.149943  735340 network_create.go:124] attempt to create docker network default-k8s-diff-port-103096 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 11:18:26.149998  735340 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 default-k8s-diff-port-103096
	I1123 11:18:26.210914  735340 network_create.go:108] docker network default-k8s-diff-port-103096 192.168.85.0/24 created
	I1123 11:18:26.210943  735340 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-103096" container
	I1123 11:18:26.211019  735340 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:18:26.235080  735340 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-103096 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:18:26.259229  735340 oci.go:103] Successfully created a docker volume default-k8s-diff-port-103096
	I1123 11:18:26.259331  735340 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-103096-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --entrypoint /usr/bin/test -v default-k8s-diff-port-103096:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:18:26.820741  735340 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-103096
	I1123 11:18:26.820812  735340 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:18:26.820831  735340 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:18:26.820906  735340 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-103096:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 11:18:31.517075  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:34.015745  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	I1123 11:18:31.331654  735340 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-103096:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.510708606s)
	I1123 11:18:31.331704  735340 kic.go:203] duration metric: took 4.510869611s to extract preloaded images to volume ...
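The extraction step above follows minikube's preload pattern: a throwaway container mounts the lz4 tarball read-only and unpacks it into the named volume that later backs /var in the node container. A sketch of the same step, with the tarball path and kicbase image written as placeholders rather than the full values from this run:

	PRELOAD=/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4   # placeholder path
	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948             # digest omitted
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" \
	  -v default-k8s-diff-port-103096:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir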
	W1123 11:18:31.331837  735340 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:18:31.331949  735340 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:18:31.392115  735340 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-103096 --name default-k8s-diff-port-103096 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --network default-k8s-diff-port-103096 --ip 192.168.85.2 --volume default-k8s-diff-port-103096:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:18:31.694526  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Running}}
	I1123 11:18:31.710805  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:18:31.743039  735340 cli_runner.go:164] Run: docker exec default-k8s-diff-port-103096 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:18:31.794427  735340 oci.go:144] the created container "default-k8s-diff-port-103096" has a running status.
	I1123 11:18:31.794454  735340 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa...
	I1123 11:18:32.315108  735340 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:18:32.336682  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:18:32.353254  735340 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:18:32.353272  735340 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-103096 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:18:32.393281  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:18:32.411804  735340 machine.go:94] provisionDockerMachine start ...
	I1123 11:18:32.411927  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:32.429849  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:32.430226  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:32.430243  735340 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:18:32.430887  735340 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45362->127.0.0.1:33822: read: connection reset by peer
	I1123 11:18:35.589177  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:18:35.589204  735340 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103096"
	I1123 11:18:35.589292  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:35.606453  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:35.606764  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:35.606782  735340 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103096 && echo "default-k8s-diff-port-103096" | sudo tee /etc/hostname
	I1123 11:18:35.770929  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:18:35.771014  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:35.789270  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:35.789634  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:35.789657  735340 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:18:35.941731  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:18:35.941757  735340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:18:35.941781  735340 ubuntu.go:190] setting up certificates
	I1123 11:18:35.941792  735340 provision.go:84] configureAuth start
	I1123 11:18:35.941874  735340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:18:35.958738  735340 provision.go:143] copyHostCerts
	I1123 11:18:35.958810  735340 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:18:35.958825  735340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:18:35.958906  735340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:18:35.959002  735340 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:18:35.959012  735340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:18:35.959041  735340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:18:35.959094  735340 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:18:35.959103  735340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:18:35.959128  735340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:18:35.959178  735340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103096 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103096 localhost minikube]
	I1123 11:18:36.097036  735340 provision.go:177] copyRemoteCerts
	I1123 11:18:36.097132  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:18:36.097177  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.114868  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:36.222474  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:18:36.243738  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 11:18:36.262385  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:18:36.279556  735340 provision.go:87] duration metric: took 337.737841ms to configureAuth
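configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP and the hostnames minikube uses. If needed, the SAN list can be checked directly on the generated file (path taken from this run):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'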
	I1123 11:18:36.279586  735340 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:18:36.279776  735340 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:18:36.279877  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.296338  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:36.296667  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:36.296682  735340 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:18:36.689250  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:18:36.689276  735340 machine.go:97] duration metric: took 4.277451625s to provisionDockerMachine
	I1123 11:18:36.689288  735340 client.go:176] duration metric: took 10.595083271s to LocalClient.Create
	I1123 11:18:36.689301  735340 start.go:167] duration metric: took 10.595149735s to libmachine.API.Create "default-k8s-diff-port-103096"
	I1123 11:18:36.689308  735340 start.go:293] postStartSetup for "default-k8s-diff-port-103096" (driver="docker")
	I1123 11:18:36.689318  735340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:18:36.689390  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:18:36.689471  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.707205  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:36.814031  735340 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:18:36.817479  735340 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:18:36.817558  735340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:18:36.817577  735340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:18:36.817645  735340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:18:36.817732  735340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:18:36.817848  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:18:36.825522  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:18:36.843462  735340 start.go:296] duration metric: took 154.138245ms for postStartSetup
	I1123 11:18:36.843837  735340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:18:36.861382  735340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:18:36.861717  735340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:18:36.861774  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.880522  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:36.982728  735340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:18:36.987479  735340 start.go:128] duration metric: took 10.896990677s to createHost
	I1123 11:18:36.987507  735340 start.go:83] releasing machines lock for "default-k8s-diff-port-103096", held for 10.897118344s
	I1123 11:18:36.987581  735340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:18:37.006838  735340 ssh_runner.go:195] Run: cat /version.json
	I1123 11:18:37.006898  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:37.006944  735340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:18:37.007153  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:37.032981  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:37.048975  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:37.141377  735340 ssh_runner.go:195] Run: systemctl --version
	I1123 11:18:37.237032  735340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:18:37.272266  735340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:18:37.276635  735340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:18:37.276708  735340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:18:37.311641  735340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
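The find/mv step above parks any pre-existing bridge or podman CNI configs so that kindnet, installed later, is the only configuration left in /etc/cni/net.d. An equivalent one-liner, shown only as a restatement of what the run did:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;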
	I1123 11:18:37.311665  735340 start.go:496] detecting cgroup driver to use...
	I1123 11:18:37.311697  735340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:18:37.311747  735340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:18:37.332143  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:18:37.356000  735340 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:18:37.356067  735340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:18:37.384987  735340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:18:37.406443  735340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:18:37.543989  735340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:18:37.677129  735340 docker.go:234] disabling docker service ...
	I1123 11:18:37.677192  735340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:18:37.698218  735340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:18:37.714550  735340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:18:37.831250  735340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:18:37.953053  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:18:37.968990  735340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:18:37.984047  735340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:18:37.984195  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:37.993220  735340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:18:37.993346  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.004775  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.015261  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.025200  735340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:18:38.034416  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.043929  735340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.058776  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.068721  735340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:18:38.076901  735340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:18:38.085166  735340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:18:38.213760  735340 ssh_runner.go:195] Run: sudo systemctl restart crio
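The sed edits above converge on a small CRI-O drop-in before the restart. A sketch of the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf after this run (section headers assumed from the stock CRI-O layout; values taken from the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]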
	I1123 11:18:38.378042  735340 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:18:38.378163  735340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:18:38.382212  735340 start.go:564] Will wait 60s for crictl version
	I1123 11:18:38.382328  735340 ssh_runner.go:195] Run: which crictl
	I1123 11:18:38.386216  735340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:18:38.411344  735340 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:18:38.411510  735340 ssh_runner.go:195] Run: crio --version
	I1123 11:18:38.444427  735340 ssh_runner.go:195] Run: crio --version
	I1123 11:18:38.476393  735340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 11:18:36.016346  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	I1123 11:18:37.515196  731689 pod_ready.go:94] pod "coredns-66bc5c9577-9gghc" is "Ready"
	I1123 11:18:37.515224  731689 pod_ready.go:86] duration metric: took 37.005169589s for pod "coredns-66bc5c9577-9gghc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.518154  731689 pod_ready.go:83] waiting for pod "etcd-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.523112  731689 pod_ready.go:94] pod "etcd-embed-certs-715679" is "Ready"
	I1123 11:18:37.523135  731689 pod_ready.go:86] duration metric: took 4.943622ms for pod "etcd-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.525571  731689 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.530861  731689 pod_ready.go:94] pod "kube-apiserver-embed-certs-715679" is "Ready"
	I1123 11:18:37.530891  731689 pod_ready.go:86] duration metric: took 5.296492ms for pod "kube-apiserver-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.534235  731689 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.715086  731689 pod_ready.go:94] pod "kube-controller-manager-embed-certs-715679" is "Ready"
	I1123 11:18:37.715141  731689 pod_ready.go:86] duration metric: took 180.878621ms for pod "kube-controller-manager-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.914780  731689 pod_ready.go:83] waiting for pod "kube-proxy-84tx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.313235  731689 pod_ready.go:94] pod "kube-proxy-84tx6" is "Ready"
	I1123 11:18:38.313262  731689 pod_ready.go:86] duration metric: took 398.450936ms for pod "kube-proxy-84tx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.513681  731689 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.914986  731689 pod_ready.go:94] pod "kube-scheduler-embed-certs-715679" is "Ready"
	I1123 11:18:38.915012  731689 pod_ready.go:86] duration metric: took 401.298298ms for pod "kube-scheduler-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.915025  731689 pod_ready.go:40] duration metric: took 38.409436511s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:18:38.998626  731689 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:18:39.003347  731689 out.go:179] * Done! kubectl is now configured to use "embed-certs-715679" cluster and "default" namespace by default
	I1123 11:18:38.479408  735340 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:18:38.495069  735340 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:18:38.499121  735340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:18:38.509121  735340 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:18:38.509236  735340 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:18:38.509288  735340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:18:38.547921  735340 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:18:38.547943  735340 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:18:38.548007  735340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:18:38.575988  735340 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:18:38.576011  735340 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:18:38.576019  735340 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 11:18:38.576105  735340 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:18:38.576181  735340 ssh_runner.go:195] Run: crio config
	I1123 11:18:38.638912  735340 cni.go:84] Creating CNI manager for ""
	I1123 11:18:38.638936  735340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:18:38.638972  735340 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:18:38.639001  735340 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103096 NodeName:default-k8s-diff-port-103096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:18:38.639133  735340 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103096"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:18:38.639210  735340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:18:38.647499  735340 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:18:38.647610  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:18:38.655366  735340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 11:18:38.668583  735340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:18:38.681396  735340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
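Before kubeadm consumes the file staged above, the rendered config can be sanity-checked; recent kubeadm releases ship a validate subcommand (shown here as an optional, assumed-available check, using the binary path from this run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new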
	I1123 11:18:38.694341  735340 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:18:38.700583  735340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:18:38.712725  735340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:18:38.823694  735340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:18:38.840611  735340 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096 for IP: 192.168.85.2
	I1123 11:18:38.840675  735340 certs.go:195] generating shared ca certs ...
	I1123 11:18:38.840699  735340 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:38.840850  735340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:18:38.840897  735340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:18:38.840911  735340 certs.go:257] generating profile certs ...
	I1123 11:18:38.840976  735340 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key
	I1123 11:18:38.840997  735340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt with IP's: []
	I1123 11:18:38.930920  735340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt ...
	I1123 11:18:38.931020  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: {Name:mk273be5686a3b6c8a5d0746afccab384de76964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:38.931386  735340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key ...
	I1123 11:18:38.931439  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key: {Name:mka1a4873bcfba6634f88c4b753c01b4ac26ca8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:38.931647  735340 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d
	I1123 11:18:38.931706  735340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 11:18:39.160290  735340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d ...
	I1123 11:18:39.160348  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d: {Name:mk5076a51d2325b6ba82fed17c9b0aea54ccbf08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.160739  735340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d ...
	I1123 11:18:39.160766  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d: {Name:mk7fb059cd30d43b6cc8524d75ff59bf0dacb592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.160965  735340 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt
	I1123 11:18:39.161079  735340 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key
	I1123 11:18:39.161180  735340 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key
	I1123 11:18:39.161203  735340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt with IP's: []
	I1123 11:18:39.829994  735340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt ...
	I1123 11:18:39.830028  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt: {Name:mkad044fd7392c1620d99108ef45826dc90ecb12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.830259  735340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key ...
	I1123 11:18:39.830278  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key: {Name:mkf652215386beb6eb90f8d14b447e3df662978c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.830482  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:18:39.830534  735340 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:18:39.830548  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:18:39.830579  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:18:39.830610  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:18:39.830637  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:18:39.830685  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:18:39.831294  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:18:39.851924  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:18:39.871206  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:18:39.899849  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:18:39.928546  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 11:18:39.947243  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:18:39.965657  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:18:39.985966  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:18:40.010413  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:18:40.044105  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:18:40.067761  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:18:40.094063  735340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:18:40.109968  735340 ssh_runner.go:195] Run: openssl version
	I1123 11:18:40.117205  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:18:40.127173  735340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:18:40.131983  735340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:18:40.132055  735340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:18:40.175109  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:18:40.184874  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:18:40.195013  735340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:18:40.199543  735340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:18:40.199716  735340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:18:40.242859  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:18:40.251622  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:18:40.260311  735340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:18:40.264196  735340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:18:40.264263  735340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:18:40.305506  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
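The openssl/ln pairs above implement OpenSSL's standard subject-hash lookup for /etc/ssl/certs: each installed CA is linked under <hash>.0. The same pattern, written out for the minikube CA from this run:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# in this run HASH resolves to b5213941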
	I1123 11:18:40.314074  735340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:18:40.317679  735340 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:18:40.317751  735340 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:18:40.317844  735340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:18:40.317905  735340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:18:40.347414  735340 cri.go:89] found id: ""
	I1123 11:18:40.347486  735340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:18:40.361061  735340 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:18:40.370838  735340 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:18:40.370903  735340 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:18:40.382218  735340 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:18:40.382250  735340 kubeadm.go:158] found existing configuration files:
	
	I1123 11:18:40.382303  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 11:18:40.391652  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:18:40.391715  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:18:40.400294  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 11:18:40.410461  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:18:40.410546  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:18:40.418315  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 11:18:40.426205  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:18:40.426302  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:18:40.435523  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 11:18:40.443634  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:18:40.443699  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 11:18:40.451466  735340 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:18:40.489796  735340 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:18:40.490024  735340 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:18:40.519763  735340 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:18:40.519908  735340 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:18:40.519968  735340 kubeadm.go:319] OS: Linux
	I1123 11:18:40.520044  735340 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:18:40.520125  735340 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:18:40.520203  735340 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:18:40.520286  735340 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:18:40.520364  735340 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:18:40.520446  735340 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:18:40.520530  735340 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:18:40.520616  735340 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:18:40.520698  735340 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:18:40.590882  735340 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:18:40.591046  735340 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:18:40.591148  735340 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 11:18:40.599752  735340 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:18:40.605175  735340 out.go:252]   - Generating certificates and keys ...
	I1123 11:18:40.605340  735340 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:18:40.605459  735340 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:18:40.782680  735340 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:18:40.837154  735340 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:18:41.733658  735340 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:18:41.960830  735340 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:18:42.450442  735340 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:18:42.450619  735340 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-103096 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:18:43.540437  735340 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:18:43.540921  735340 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-103096 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:18:43.697559  735340 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:18:44.878159  735340 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:18:45.430325  735340 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:18:45.430652  735340 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:18:45.699467  735340 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:18:46.541371  735340 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:18:46.665127  735340 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:18:47.071889  735340 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:18:47.762623  735340 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:18:47.763368  735340 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:18:47.766112  735340 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:18:47.769637  735340 out.go:252]   - Booting up control plane ...
	I1123 11:18:47.769766  735340 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:18:47.769878  735340 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:18:47.769969  735340 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:18:47.793950  735340 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:18:47.794068  735340 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:18:47.802964  735340 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:18:47.803072  735340 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:18:47.805427  735340 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:18:47.962816  735340 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:18:47.962938  735340 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:18:49.467919  735340 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501570403s
	I1123 11:18:49.468093  735340 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:18:49.468185  735340 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 11:18:49.468276  735340 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:18:49.468362  735340 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.876561623Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.881211576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.881246522Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.881266757Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.885812961Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.8858479Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.885874124Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.89209214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.892126299Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.892151924Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.895926754Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.895958131Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.962440658Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9942fbdf-97b3-4de1-b49e-484e383010c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.965125647Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b20ef978-a1c4-4d37-bb96-da8f60660669 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.968235598Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper" id=9631d9f0-077f-451a-a901-572d78bc50f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.968417748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.985630043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.986659878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.028761128Z" level=info msg="Created container c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper" id=9631d9f0-077f-451a-a901-572d78bc50f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.034233779Z" level=info msg="Starting container: c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986" id=5fe7f6f6-fe38-41ff-bbc4-f337d54493e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.038974014Z" level=info msg="Started container" PID=1715 containerID=c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper id=5fe7f6f6-fe38-41ff-bbc4-f337d54493e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06e66ba8280f058b706a5e01400a330ddd899b4371c9b8506409a133acd295c9
	Nov 23 11:18:49 embed-certs-715679 conmon[1712]: conmon c66279f8f6e240c97df3 <ninfo>: container 1715 exited with status 1
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.37983805Z" level=info msg="Removing container: a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670" id=d088b57e-6467-4fea-bcff-2dfcc597bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.389755798Z" level=info msg="Error loading conmon cgroup of container a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670: cgroup deleted" id=d088b57e-6467-4fea-bcff-2dfcc597bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.393206558Z" level=info msg="Removed container a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper" id=d088b57e-6467-4fea-bcff-2dfcc597bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c66279f8f6e24       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   06e66ba8280f0       dashboard-metrics-scraper-6ffb444bf9-pqt65   kubernetes-dashboard
	0635b3b4249e8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   bdac71b37f1d6       storage-provisioner                          kube-system
	2cf450fb7ea4a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   925a16ac680b4       kubernetes-dashboard-855c9754f9-jz7sf        kubernetes-dashboard
	6d43e5477c835       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   7204d9966bf4c       coredns-66bc5c9577-9gghc                     kube-system
	a25ac353726d3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   d1be927d705c6       busybox                                      default
	fa13ac96e1521       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   bdac71b37f1d6       storage-provisioner                          kube-system
	2e9c10cadc1c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   54a3dbde4bf04       kindnet-gh5h2                                kube-system
	75d7b06e8aa7d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   1e6e55ed3f8b9       kube-proxy-84tx6                             kube-system
	20df221b7dfb3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   808b5cf266285       kube-scheduler-embed-certs-715679            kube-system
	3705907a0fd2a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6a0a3d8867a3a       kube-apiserver-embed-certs-715679            kube-system
	c20c209f3dc2b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   31ae4bf9c577e       kube-controller-manager-embed-certs-715679   kube-system
	d4260b2942288       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   bfb7a0c0ce678       etcd-embed-certs-715679                      kube-system
	
	
	==> coredns [6d43e5477c8354b480be323d501bde9ccdf2ce5fb0a610110f36cc963145e4b4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39641 - 33274 "HINFO IN 1885163275542618875.5777297666075694139. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033896281s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-715679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-715679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-715679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_16_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:16:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-715679
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:18:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:17:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-715679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0f9e54f4-bafa-460f-a78e-697026168606
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-9gghc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-715679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-gh5h2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-715679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-715679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-84tx6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-715679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pqt65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jz7sf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-715679 event: Registered Node embed-certs-715679 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-715679 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-715679 event: Registered Node embed-certs-715679 in Controller
	
	
	==> dmesg <==
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77] <==
	{"level":"warn","ts":"2025-11-23T11:17:56.529535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.588295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.640156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.704172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.728654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.745521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.769297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.793663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.805071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.827789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.853122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.868601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.889395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.907450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.924434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.947964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.961536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.979271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.009868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.013969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.031415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.052678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.076387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.116858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.193474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36408","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:18:55 up  4:01,  0 user,  load average: 3.45, 3.51, 2.97
	Linux embed-certs-715679 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e9c10cadc1c93a0579863766c9dd59aaf1ebf2733e6a3127e1e121114213768] <==
	I1123 11:17:59.695838       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:17:59.696216       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:17:59.696775       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:17:59.696845       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:17:59.696892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:17:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:17:59.875571       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:17:59.875589       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:17:59.875598       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:17:59.876299       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:18:29.875693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:18:29.875978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:18:29.876231       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:18:29.876369       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:18:31.276110       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:18:31.276143       1 metrics.go:72] Registering metrics
	I1123 11:18:31.276207       1 controller.go:711] "Syncing nftables rules"
	I1123 11:18:39.876253       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:18:39.876316       1 main.go:301] handling current node
	I1123 11:18:49.883255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:18:49.883294       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c] <==
	I1123 11:17:58.530468       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:17:58.530474       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:17:58.534076       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:17:58.534089       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:17:58.545299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 11:17:58.545353       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:17:58.545478       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:17:58.545680       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 11:17:58.545732       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:17:58.549005       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 11:17:58.553694       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:17:58.568357       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:17:58.570332       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1123 11:17:58.584644       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:17:58.958460       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:17:59.025267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:17:59.365010       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:17:59.566452       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:17:59.680684       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:17:59.756661       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:17:59.884097       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.188.128"}
	I1123 11:17:59.919942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.46.78"}
	I1123 11:18:02.482164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:18:02.887223       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:18:03.032976       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132] <==
	I1123 11:18:02.489126       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:18:02.489754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:18:02.490238       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:18:02.491279       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:18:02.491533       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-715679"
	I1123 11:18:02.491677       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:18:02.495116       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:18:02.496324       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 11:18:02.497836       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 11:18:02.502095       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:18:02.503288       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:18:02.505086       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:18:02.509657       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:18:02.511083       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:18:02.514598       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:18:02.516983       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:18:02.520324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:18:02.525325       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 11:18:02.525336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:18:02.525456       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:18:02.526040       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:18:02.526087       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:18:02.526104       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:18:02.526567       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:18:02.541499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [75d7b06e8aa7dcd731688456f75103f5b70f9d0a304f7bc68eb282728b5c6cd5] <==
	I1123 11:17:59.776768       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:17:59.955903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:18:00.197957       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:18:00.201194       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:18:00.201317       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:18:00.272471       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:18:00.272615       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:18:00.286484       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:18:00.286983       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:18:00.287001       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:18:00.289226       1 config.go:200] "Starting service config controller"
	I1123 11:18:00.289357       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:18:00.289455       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:18:00.289498       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:18:00.289556       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:18:00.289598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:18:00.294824       1 config.go:309] "Starting node config controller"
	I1123 11:18:00.294931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:18:00.294966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:18:00.389644       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:18:00.389744       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:18:00.389659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199] <==
	I1123 11:17:56.331071       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:17:58.691651       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:17:58.695913       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:17:58.710541       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:17:58.710579       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:17:58.710617       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:58.710623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:58.710636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:17:58.710643       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:17:58.728332       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:17:58.732372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:17:58.819410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:17:58.819471       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 11:17:58.819564       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:18:07 embed-certs-715679 kubelet[782]: I1123 11:18:07.297023     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 11:18:08 embed-certs-715679 kubelet[782]: I1123 11:18:08.247766     782 scope.go:117] "RemoveContainer" containerID="dd179f2b97e7ee363d054856bdd28a53cff4cc38aacd6faa8fe879a4264ce0c8"
	Nov 23 11:18:09 embed-certs-715679 kubelet[782]: I1123 11:18:09.253513     782 scope.go:117] "RemoveContainer" containerID="dd179f2b97e7ee363d054856bdd28a53cff4cc38aacd6faa8fe879a4264ce0c8"
	Nov 23 11:18:09 embed-certs-715679 kubelet[782]: I1123 11:18:09.254191     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:09 embed-certs-715679 kubelet[782]: E1123 11:18:09.254497     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:10 embed-certs-715679 kubelet[782]: I1123 11:18:10.255329     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:10 embed-certs-715679 kubelet[782]: E1123 11:18:10.255503     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:13 embed-certs-715679 kubelet[782]: I1123 11:18:13.281379     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jz7sf" podStartSLOduration=1.090559548 podStartE2EDuration="10.280699811s" podCreationTimestamp="2025-11-23 11:18:03 +0000 UTC" firstStartedPulling="2025-11-23 11:18:03.535441886 +0000 UTC m=+11.791551255" lastFinishedPulling="2025-11-23 11:18:12.725582149 +0000 UTC m=+20.981691518" observedRunningTime="2025-11-23 11:18:13.280448055 +0000 UTC m=+21.536557440" watchObservedRunningTime="2025-11-23 11:18:13.280699811 +0000 UTC m=+21.536809180"
	Nov 23 11:18:13 embed-certs-715679 kubelet[782]: I1123 11:18:13.424110     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:13 embed-certs-715679 kubelet[782]: E1123 11:18:13.424324     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:23 embed-certs-715679 kubelet[782]: I1123 11:18:23.961745     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:24 embed-certs-715679 kubelet[782]: I1123 11:18:24.303395     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:24 embed-certs-715679 kubelet[782]: I1123 11:18:24.303701     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:24 embed-certs-715679 kubelet[782]: E1123 11:18:24.303858     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:30 embed-certs-715679 kubelet[782]: I1123 11:18:30.322215     782 scope.go:117] "RemoveContainer" containerID="fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b"
	Nov 23 11:18:33 embed-certs-715679 kubelet[782]: I1123 11:18:33.424192     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:33 embed-certs-715679 kubelet[782]: E1123 11:18:33.424388     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:48 embed-certs-715679 kubelet[782]: I1123 11:18:48.960979     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:49 embed-certs-715679 kubelet[782]: I1123 11:18:49.378446     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:50 embed-certs-715679 kubelet[782]: I1123 11:18:50.382391     782 scope.go:117] "RemoveContainer" containerID="c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986"
	Nov 23 11:18:50 embed-certs-715679 kubelet[782]: E1123 11:18:50.383243     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:52 embed-certs-715679 kubelet[782]: E1123 11:18:52.199650     782 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio-06e66ba8280f058b706a5e01400a330ddd899b4371c9b8506409a133acd295c9\": RecentStats: unable to find data in memory cache]"
	Nov 23 11:18:52 embed-certs-715679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:18:52 embed-certs-715679 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:18:52 embed-certs-715679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2cf450fb7ea4ad6a81a7878a6098c4aab3262b246b1d1326e5ce26be1e08beba] <==
	2025/11/23 11:18:12 Using namespace: kubernetes-dashboard
	2025/11/23 11:18:12 Using in-cluster config to connect to apiserver
	2025/11/23 11:18:12 Using secret token for csrf signing
	2025/11/23 11:18:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:18:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:18:12 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 11:18:12 Generating JWE encryption key
	2025/11/23 11:18:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:18:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:18:13 Initializing JWE encryption key from synchronized object
	2025/11/23 11:18:13 Creating in-cluster Sidecar client
	2025/11/23 11:18:13 Serving insecurely on HTTP port: 9090
	2025/11/23 11:18:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:18:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:18:12 Starting overwatch
	
	
	==> storage-provisioner [0635b3b4249e89f567cbfcf4fca7e7c36f6918fc08b8db8d3517ee5cc414b46a] <==
	I1123 11:18:30.427360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:18:30.439250       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:18:30.439304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:18:30.445838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:33.900799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:38.161242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:41.760150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:44.814843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:47.837229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:47.844635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:47.845184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:18:47.845496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-715679_f4a29ed1-26c5-4062-9639-543f68ec6c6e!
	I1123 11:18:47.846484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69cca960-8539-4f65-91a5-a2434eb78e5c", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-715679_f4a29ed1-26c5-4062-9639-543f68ec6c6e became leader
	W1123 11:18:47.855131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:47.862719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:47.947685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-715679_f4a29ed1-26c5-4062-9639-543f68ec6c6e!
	W1123 11:18:49.866932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:49.873043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:51.878075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:51.902200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:53.904996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:53.921658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:55.925874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:55.940262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b] <==
	I1123 11:17:59.808173       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:18:29.827077       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-715679 -n embed-certs-715679
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-715679 -n embed-certs-715679: exit status 2 (405.924855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-715679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-715679
helpers_test.go:243: (dbg) docker inspect embed-certs-715679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944",
	        "Created": "2025-11-23T11:15:57.805460889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731825,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:17:44.944833739Z",
	            "FinishedAt": "2025-11-23T11:17:43.917904612Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/hosts",
	        "LogPath": "/var/lib/docker/containers/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944/bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944-json.log",
	        "Name": "/embed-certs-715679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-715679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-715679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf3b5a2f915e37cc7c4e562e9252bbe634a1633192a473ce5f7665d8393b7944",
	                "LowerDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a67f7d2a9c42fde4eafff1c04c81aef4ee98e43673b7b3b09f7871b72d9c50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-715679",
	                "Source": "/var/lib/docker/volumes/embed-certs-715679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-715679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-715679",
	                "name.minikube.sigs.k8s.io": "embed-certs-715679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "123a25548df68610e38fab9bc466be27653489cae74348f0815b597a21ebf459",
	            "SandboxKey": "/var/run/docker/netns/123a25548df6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-715679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:bc:58:a6:86:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9dc6254b6af11e97f0c613269fd92518cae572b3a5313c8e4edd68d21062116b",
	                    "EndpointID": "369cb5d862fe089efebfffcdfec0a43b6f83f043c1a8922f3cc442fe856c38d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-715679",
	                        "bf3b5a2f915e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
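
The inspect dump above is where the post-mortem reads the container state and the localhost port bindings (22 -> 33817, 8443 -> 33820, and so on). A hedged Go sketch that extracts just those fields by shelling out to docker inspect; the struct only covers the fields shown in this output.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type inspectInfo struct {
        State struct {
            Status string `json:"Status"`
            Paused bool   `json:"Paused"`
        } `json:"State"`
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string `json:"HostIp"`
                HostPort string `json:"HostPort"`
            } `json:"Ports"`
        } `json:"NetworkSettings"`
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "embed-certs-715679").Output()
        if err != nil {
            log.Fatal(err)
        }
        var infos []inspectInfo // docker inspect always emits a JSON array
        if err := json.Unmarshal(out, &infos); err != nil {
            log.Fatal(err)
        }
        for _, in := range infos {
            fmt.Println("status:", in.State.Status, "paused:", in.State.Paused)
            for port, binds := range in.NetworkSettings.Ports {
                for _, b := range binds {
                    fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
                }
            }
        }
    }
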
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679: exit status 2 (432.769809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
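
As the "(may be ok)" note indicates, a non-zero exit from `minikube status` does not by itself mean the query failed; the harness therefore records the stdout ("Running") and the exit code separately. A small sketch of that pattern, assuming the binary path and profile from the log above; the exit-code interpretation is left to the caller.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64",
            "status", "--format={{.Host}}", "-p", "embed-certs-715679", "-n", "embed-certs-715679")
        out, err := cmd.Output()
        code := 0
        if exitErr, ok := err.(*exec.ExitError); ok {
            // e.g. exit status 2 while stdout still reports "Running"
            code = exitErr.ExitCode()
        } else if err != nil {
            log.Fatal(err) // the binary could not be run at all
        }
        fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)
    }
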
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-715679 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-715679 logs -n 25: (1.635087729s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:14 UTC │ 23 Nov 25 11:15 UTC │
	│ image   │ old-k8s-version-378086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ pause   │ -p old-k8s-version-378086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │                     │
	│ start   │ -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629387       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:18:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:18:25.829290  735340 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:18:25.829443  735340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:25.829455  735340 out.go:374] Setting ErrFile to fd 2...
	I1123 11:18:25.829460  735340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:18:25.829730  735340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:18:25.830160  735340 out.go:368] Setting JSON to false
	I1123 11:18:25.831121  735340 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14455,"bootTime":1763882251,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:18:25.831190  735340 start.go:143] virtualization:  
	I1123 11:18:25.835135  735340 out.go:179] * [default-k8s-diff-port-103096] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:18:25.838339  735340 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:18:25.838460  735340 notify.go:221] Checking for updates...
	I1123 11:18:25.844631  735340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:18:25.847780  735340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:18:25.850884  735340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:18:25.853933  735340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:18:25.856812  735340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:18:25.860212  735340 config.go:182] Loaded profile config "embed-certs-715679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:18:25.860318  735340 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:18:25.889528  735340 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:18:25.889709  735340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:18:25.966794  735340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:18:25.956474063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:18:25.966895  735340 docker.go:319] overlay module found
	I1123 11:18:25.970203  735340 out.go:179] * Using the docker driver based on user configuration
	I1123 11:18:25.973122  735340 start.go:309] selected driver: docker
	I1123 11:18:25.973139  735340 start.go:927] validating driver "docker" against <nil>
	I1123 11:18:25.973154  735340 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:18:25.973910  735340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:18:26.049770  735340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:18:26.040558323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:18:26.049972  735340 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 11:18:26.050242  735340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:18:26.053152  735340 out.go:179] * Using Docker driver with root privileges
	I1123 11:18:26.056071  735340 cni.go:84] Creating CNI manager for ""
	I1123 11:18:26.056146  735340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:18:26.056159  735340 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:18:26.056246  735340 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:18:26.059697  735340 out.go:179] * Starting "default-k8s-diff-port-103096" primary control-plane node in "default-k8s-diff-port-103096" cluster
	I1123 11:18:26.062701  735340 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:18:26.065755  735340 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:18:26.068714  735340 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:18:26.068777  735340 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:18:26.068788  735340 cache.go:65] Caching tarball of preloaded images
	I1123 11:18:26.068819  735340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:18:26.068896  735340 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:18:26.068908  735340 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:18:26.069017  735340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:18:26.069035  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json: {Name:mk28ac05f5a9433f32913884c1bcfb8cd8c6ec08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:26.090177  735340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:18:26.090202  735340 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:18:26.090223  735340 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:18:26.090259  735340 start.go:360] acquireMachinesLock for default-k8s-diff-port-103096: {Name:mk974e47f06d6cbaa10109a8c2801bcc82e17d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:18:26.090374  735340 start.go:364] duration metric: took 94.189µs to acquireMachinesLock for "default-k8s-diff-port-103096"
	I1123 11:18:26.090406  735340 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:18:26.090473  735340 start.go:125] createHost starting for "" (driver="docker")
	W1123 11:18:25.021891  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:27.516233  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:29.516700  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	I1123 11:18:26.093892  735340 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:18:26.094154  735340 start.go:159] libmachine.API.Create for "default-k8s-diff-port-103096" (driver="docker")
	I1123 11:18:26.094195  735340 client.go:173] LocalClient.Create starting
	I1123 11:18:26.094281  735340 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:18:26.094352  735340 main.go:143] libmachine: Decoding PEM data...
	I1123 11:18:26.094386  735340 main.go:143] libmachine: Parsing certificate...
	I1123 11:18:26.094469  735340 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:18:26.094543  735340 main.go:143] libmachine: Decoding PEM data...
	I1123 11:18:26.094835  735340 main.go:143] libmachine: Parsing certificate...
	I1123 11:18:26.095377  735340 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:18:26.112809  735340 cli_runner.go:211] docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:18:26.112933  735340 network_create.go:284] running [docker network inspect default-k8s-diff-port-103096] to gather additional debugging logs...
	I1123 11:18:26.112983  735340 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096
	W1123 11:18:26.130811  735340 cli_runner.go:211] docker network inspect default-k8s-diff-port-103096 returned with exit code 1
	I1123 11:18:26.130841  735340 network_create.go:287] error running [docker network inspect default-k8s-diff-port-103096]: docker network inspect default-k8s-diff-port-103096: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-103096 not found
	I1123 11:18:26.130854  735340 network_create.go:289] output of [docker network inspect default-k8s-diff-port-103096]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-103096 not found
	
	** /stderr **
	I1123 11:18:26.130965  735340 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:18:26.148307  735340 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:18:26.148652  735340 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:18:26.149035  735340 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:18:26.149316  735340 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9dc6254b6af1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:0e:72:d4:64:a7} reservation:<nil>}
	I1123 11:18:26.149919  735340 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2a620}
	I1123 11:18:26.149943  735340 network_create.go:124] attempt to create docker network default-k8s-diff-port-103096 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 11:18:26.149998  735340 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 default-k8s-diff-port-103096
	I1123 11:18:26.210914  735340 network_create.go:108] docker network default-k8s-diff-port-103096 192.168.85.0/24 created
	I1123 11:18:26.210943  735340 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-103096" container
	I1123 11:18:26.211019  735340 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:18:26.235080  735340 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-103096 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:18:26.259229  735340 oci.go:103] Successfully created a docker volume default-k8s-diff-port-103096
	I1123 11:18:26.259331  735340 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-103096-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --entrypoint /usr/bin/test -v default-k8s-diff-port-103096:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:18:26.820741  735340 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-103096
	I1123 11:18:26.820812  735340 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:18:26.820831  735340 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:18:26.820906  735340 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-103096:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 11:18:31.517075  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	W1123 11:18:34.015745  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	I1123 11:18:31.331654  735340 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-103096:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.510708606s)
	I1123 11:18:31.331704  735340 kic.go:203] duration metric: took 4.510869611s to extract preloaded images to volume ...
	W1123 11:18:31.331837  735340 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:18:31.331949  735340 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:18:31.392115  735340 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-103096 --name default-k8s-diff-port-103096 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-103096 --network default-k8s-diff-port-103096 --ip 192.168.85.2 --volume default-k8s-diff-port-103096:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:18:31.694526  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Running}}
	I1123 11:18:31.710805  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:18:31.743039  735340 cli_runner.go:164] Run: docker exec default-k8s-diff-port-103096 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:18:31.794427  735340 oci.go:144] the created container "default-k8s-diff-port-103096" has a running status.
	I1123 11:18:31.794454  735340 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa...
	I1123 11:18:32.315108  735340 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:18:32.336682  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:18:32.353254  735340 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:18:32.353272  735340 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-103096 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:18:32.393281  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:18:32.411804  735340 machine.go:94] provisionDockerMachine start ...
	I1123 11:18:32.411927  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:32.429849  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:32.430226  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:32.430243  735340 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:18:32.430887  735340 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45362->127.0.0.1:33822: read: connection reset by peer
	I1123 11:18:35.589177  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:18:35.589204  735340 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103096"
	I1123 11:18:35.589292  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:35.606453  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:35.606764  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:35.606782  735340 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103096 && echo "default-k8s-diff-port-103096" | sudo tee /etc/hostname
	I1123 11:18:35.770929  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:18:35.771014  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:35.789270  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:35.789634  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:35.789657  735340 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:18:35.941731  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:18:35.941757  735340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:18:35.941781  735340 ubuntu.go:190] setting up certificates
	I1123 11:18:35.941792  735340 provision.go:84] configureAuth start
	I1123 11:18:35.941874  735340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:18:35.958738  735340 provision.go:143] copyHostCerts
	I1123 11:18:35.958810  735340 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:18:35.958825  735340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:18:35.958906  735340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:18:35.959002  735340 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:18:35.959012  735340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:18:35.959041  735340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:18:35.959094  735340 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:18:35.959103  735340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:18:35.959128  735340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:18:35.959178  735340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103096 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103096 localhost minikube]
	I1123 11:18:36.097036  735340 provision.go:177] copyRemoteCerts
	I1123 11:18:36.097132  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:18:36.097177  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.114868  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:36.222474  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:18:36.243738  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 11:18:36.262385  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:18:36.279556  735340 provision.go:87] duration metric: took 337.737841ms to configureAuth
	I1123 11:18:36.279586  735340 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:18:36.279776  735340 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:18:36.279877  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.296338  735340 main.go:143] libmachine: Using SSH client type: native
	I1123 11:18:36.296667  735340 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1123 11:18:36.296682  735340 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:18:36.689250  735340 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:18:36.689276  735340 machine.go:97] duration metric: took 4.277451625s to provisionDockerMachine
	I1123 11:18:36.689288  735340 client.go:176] duration metric: took 10.595083271s to LocalClient.Create
	I1123 11:18:36.689301  735340 start.go:167] duration metric: took 10.595149735s to libmachine.API.Create "default-k8s-diff-port-103096"
	I1123 11:18:36.689308  735340 start.go:293] postStartSetup for "default-k8s-diff-port-103096" (driver="docker")
	I1123 11:18:36.689318  735340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:18:36.689390  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:18:36.689471  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.707205  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:36.814031  735340 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:18:36.817479  735340 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:18:36.817558  735340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:18:36.817577  735340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:18:36.817645  735340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:18:36.817732  735340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:18:36.817848  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:18:36.825522  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:18:36.843462  735340 start.go:296] duration metric: took 154.138245ms for postStartSetup
	I1123 11:18:36.843837  735340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:18:36.861382  735340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:18:36.861717  735340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:18:36.861774  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:36.880522  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:36.982728  735340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:18:36.987479  735340 start.go:128] duration metric: took 10.896990677s to createHost
	I1123 11:18:36.987507  735340 start.go:83] releasing machines lock for "default-k8s-diff-port-103096", held for 10.897118344s
	I1123 11:18:36.987581  735340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:18:37.006838  735340 ssh_runner.go:195] Run: cat /version.json
	I1123 11:18:37.006898  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:37.006944  735340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:18:37.007153  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:18:37.032981  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:37.048975  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:18:37.141377  735340 ssh_runner.go:195] Run: systemctl --version
	I1123 11:18:37.237032  735340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:18:37.272266  735340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:18:37.276635  735340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:18:37.276708  735340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:18:37.311641  735340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 11:18:37.311665  735340 start.go:496] detecting cgroup driver to use...
	I1123 11:18:37.311697  735340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:18:37.311747  735340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:18:37.332143  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:18:37.356000  735340 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:18:37.356067  735340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:18:37.384987  735340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:18:37.406443  735340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:18:37.543989  735340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:18:37.677129  735340 docker.go:234] disabling docker service ...
	I1123 11:18:37.677192  735340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:18:37.698218  735340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:18:37.714550  735340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:18:37.831250  735340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:18:37.953053  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:18:37.968990  735340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:18:37.984047  735340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:18:37.984195  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:37.993220  735340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:18:37.993346  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.004775  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.015261  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.025200  735340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:18:38.034416  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.043929  735340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:18:38.058776  735340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
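Taken together, the sed edits above leave the 02-crio.conf drop-in with roughly the following settings (an illustrative reconstruction; the TOML section headers are assumptions, only the keys and values come from the commands above):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]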
	I1123 11:18:38.068721  735340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:18:38.076901  735340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:18:38.085166  735340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:18:38.213760  735340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 11:18:38.378042  735340 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:18:38.378163  735340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:18:38.382212  735340 start.go:564] Will wait 60s for crictl version
	I1123 11:18:38.382328  735340 ssh_runner.go:195] Run: which crictl
	I1123 11:18:38.386216  735340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:18:38.411344  735340 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:18:38.411510  735340 ssh_runner.go:195] Run: crio --version
	I1123 11:18:38.444427  735340 ssh_runner.go:195] Run: crio --version
	I1123 11:18:38.476393  735340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 11:18:36.016346  731689 pod_ready.go:104] pod "coredns-66bc5c9577-9gghc" is not "Ready", error: <nil>
	I1123 11:18:37.515196  731689 pod_ready.go:94] pod "coredns-66bc5c9577-9gghc" is "Ready"
	I1123 11:18:37.515224  731689 pod_ready.go:86] duration metric: took 37.005169589s for pod "coredns-66bc5c9577-9gghc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.518154  731689 pod_ready.go:83] waiting for pod "etcd-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.523112  731689 pod_ready.go:94] pod "etcd-embed-certs-715679" is "Ready"
	I1123 11:18:37.523135  731689 pod_ready.go:86] duration metric: took 4.943622ms for pod "etcd-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.525571  731689 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.530861  731689 pod_ready.go:94] pod "kube-apiserver-embed-certs-715679" is "Ready"
	I1123 11:18:37.530891  731689 pod_ready.go:86] duration metric: took 5.296492ms for pod "kube-apiserver-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.534235  731689 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.715086  731689 pod_ready.go:94] pod "kube-controller-manager-embed-certs-715679" is "Ready"
	I1123 11:18:37.715141  731689 pod_ready.go:86] duration metric: took 180.878621ms for pod "kube-controller-manager-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:37.914780  731689 pod_ready.go:83] waiting for pod "kube-proxy-84tx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.313235  731689 pod_ready.go:94] pod "kube-proxy-84tx6" is "Ready"
	I1123 11:18:38.313262  731689 pod_ready.go:86] duration metric: took 398.450936ms for pod "kube-proxy-84tx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.513681  731689 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.914986  731689 pod_ready.go:94] pod "kube-scheduler-embed-certs-715679" is "Ready"
	I1123 11:18:38.915012  731689 pod_ready.go:86] duration metric: took 401.298298ms for pod "kube-scheduler-embed-certs-715679" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:18:38.915025  731689 pod_ready.go:40] duration metric: took 38.409436511s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:18:38.998626  731689 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:18:39.003347  731689 out.go:179] * Done! kubectl is now configured to use "embed-certs-715679" cluster and "default" namespace by default
	I1123 11:18:38.479408  735340 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:18:38.495069  735340 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:18:38.499121  735340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
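A quick way to confirm the hosts entry written above, from inside the node (a sketch, not taken from the log):

	grep 'host.minikube.internal' /etc/hosts
	# expected: 192.168.85.1	host.minikube.internal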
	I1123 11:18:38.509121  735340 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:18:38.509236  735340 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:18:38.509288  735340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:18:38.547921  735340 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:18:38.547943  735340 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:18:38.548007  735340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:18:38.575988  735340 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:18:38.576011  735340 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:18:38.576019  735340 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 11:18:38.576105  735340 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
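The [Unit]/[Service] fragment above is the kubelet drop-in that the scp steps below place at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for confirming systemd picks it up (illustrative only):

	sudo systemctl cat kubelet             # unit file plus all drop-ins
	systemctl show kubelet -p DropInPaths  # should list .../kubelet.service.d/10-kubeadm.conf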
	I1123 11:18:38.576181  735340 ssh_runner.go:195] Run: crio config
	I1123 11:18:38.638912  735340 cni.go:84] Creating CNI manager for ""
	I1123 11:18:38.638936  735340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:18:38.638972  735340 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:18:38.639001  735340 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103096 NodeName:default-k8s-diff-port-103096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:18:38.639133  735340 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103096"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
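Once this config has been written out to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below), it can be sanity-checked before the real init; a sketch assuming the kubeadm binary path shown later in the log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run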
	
	I1123 11:18:38.639210  735340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:18:38.647499  735340 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:18:38.647610  735340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:18:38.655366  735340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 11:18:38.668583  735340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:18:38.681396  735340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1123 11:18:38.694341  735340 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:18:38.700583  735340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:18:38.712725  735340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:18:38.823694  735340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:18:38.840611  735340 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096 for IP: 192.168.85.2
	I1123 11:18:38.840675  735340 certs.go:195] generating shared ca certs ...
	I1123 11:18:38.840699  735340 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:38.840850  735340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:18:38.840897  735340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:18:38.840911  735340 certs.go:257] generating profile certs ...
	I1123 11:18:38.840976  735340 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key
	I1123 11:18:38.840997  735340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt with IP's: []
	I1123 11:18:38.930920  735340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt ...
	I1123 11:18:38.931020  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: {Name:mk273be5686a3b6c8a5d0746afccab384de76964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:38.931386  735340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key ...
	I1123 11:18:38.931439  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key: {Name:mka1a4873bcfba6634f88c4b753c01b4ac26ca8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:38.931647  735340 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d
	I1123 11:18:38.931706  735340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 11:18:39.160290  735340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d ...
	I1123 11:18:39.160348  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d: {Name:mk5076a51d2325b6ba82fed17c9b0aea54ccbf08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.160739  735340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d ...
	I1123 11:18:39.160766  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d: {Name:mk7fb059cd30d43b6cc8524d75ff59bf0dacb592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.160965  735340 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt.3484d55d -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt
	I1123 11:18:39.161079  735340 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key
	I1123 11:18:39.161180  735340 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key
	I1123 11:18:39.161203  735340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt with IP's: []
	I1123 11:18:39.829994  735340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt ...
	I1123 11:18:39.830028  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt: {Name:mkad044fd7392c1620d99108ef45826dc90ecb12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.830259  735340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key ...
	I1123 11:18:39.830278  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key: {Name:mkf652215386beb6eb90f8d14b447e3df662978c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:18:39.830482  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:18:39.830534  735340 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:18:39.830548  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:18:39.830579  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:18:39.830610  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:18:39.830637  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:18:39.830685  735340 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:18:39.831294  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:18:39.851924  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:18:39.871206  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:18:39.899849  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:18:39.928546  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 11:18:39.947243  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:18:39.965657  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:18:39.985966  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:18:40.010413  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:18:40.044105  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:18:40.067761  735340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:18:40.094063  735340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:18:40.109968  735340 ssh_runner.go:195] Run: openssl version
	I1123 11:18:40.117205  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:18:40.127173  735340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:18:40.131983  735340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:18:40.132055  735340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:18:40.175109  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:18:40.184874  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:18:40.195013  735340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:18:40.199543  735340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:18:40.199716  735340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:18:40.242859  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:18:40.251622  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:18:40.260311  735340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:18:40.264196  735340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:18:40.264263  735340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:18:40.305506  735340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
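The openssl/ln pairs above implement OpenSSL's subject-hash symlink scheme: b5213941.0 is the subject hash of minikubeCA.pem. The same two steps by hand (a sketch):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"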
	I1123 11:18:40.314074  735340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:18:40.317679  735340 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:18:40.317751  735340 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:18:40.317844  735340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:18:40.317905  735340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:18:40.347414  735340 cri.go:89] found id: ""
	I1123 11:18:40.347486  735340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:18:40.361061  735340 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:18:40.370838  735340 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:18:40.370903  735340 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:18:40.382218  735340 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:18:40.382250  735340 kubeadm.go:158] found existing configuration files:
	
	I1123 11:18:40.382303  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 11:18:40.391652  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:18:40.391715  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:18:40.400294  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 11:18:40.410461  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:18:40.410546  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:18:40.418315  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 11:18:40.426205  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:18:40.426302  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:18:40.435523  735340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 11:18:40.443634  735340 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:18:40.443699  735340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 11:18:40.451466  735340 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:18:40.489796  735340 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:18:40.490024  735340 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:18:40.519763  735340 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:18:40.519908  735340 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:18:40.519968  735340 kubeadm.go:319] OS: Linux
	I1123 11:18:40.520044  735340 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:18:40.520125  735340 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:18:40.520203  735340 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:18:40.520286  735340 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:18:40.520364  735340 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:18:40.520446  735340 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:18:40.520530  735340 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:18:40.520616  735340 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:18:40.520698  735340 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:18:40.590882  735340 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:18:40.591046  735340 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:18:40.591148  735340 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
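The pre-pull that this preflight message refers to can be run on its own; a sketch using the same binary path and config file as the init command above:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml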
	I1123 11:18:40.599752  735340 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:18:40.605175  735340 out.go:252]   - Generating certificates and keys ...
	I1123 11:18:40.605340  735340 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:18:40.605459  735340 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:18:40.782680  735340 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:18:40.837154  735340 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:18:41.733658  735340 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:18:41.960830  735340 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:18:42.450442  735340 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:18:42.450619  735340 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-103096 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:18:43.540437  735340 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:18:43.540921  735340 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-103096 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 11:18:43.697559  735340 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:18:44.878159  735340 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:18:45.430325  735340 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:18:45.430652  735340 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:18:45.699467  735340 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:18:46.541371  735340 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:18:46.665127  735340 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:18:47.071889  735340 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:18:47.762623  735340 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:18:47.763368  735340 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:18:47.766112  735340 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:18:47.769637  735340 out.go:252]   - Booting up control plane ...
	I1123 11:18:47.769766  735340 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:18:47.769878  735340 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:18:47.769969  735340 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:18:47.793950  735340 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:18:47.794068  735340 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:18:47.802964  735340 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:18:47.803072  735340 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:18:47.805427  735340 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:18:47.962816  735340 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:18:47.962938  735340 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:18:49.467919  735340 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501570403s
	I1123 11:18:49.468093  735340 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:18:49.468185  735340 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 11:18:49.468276  735340 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:18:49.468362  735340 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 11:18:53.504019  735340 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.036194713s
	I1123 11:18:55.188925  735340 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.721284027s
	
	
	==> CRI-O <==
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.876561623Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.881211576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.881246522Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.881266757Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.885812961Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.8858479Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.885874124Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.89209214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.892126299Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.892151924Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.895926754Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:18:39 embed-certs-715679 crio[654]: time="2025-11-23T11:18:39.895958131Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.962440658Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9942fbdf-97b3-4de1-b49e-484e383010c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.965125647Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b20ef978-a1c4-4d37-bb96-da8f60660669 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.968235598Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper" id=9631d9f0-077f-451a-a901-572d78bc50f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.968417748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.985630043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:48 embed-certs-715679 crio[654]: time="2025-11-23T11:18:48.986659878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.028761128Z" level=info msg="Created container c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper" id=9631d9f0-077f-451a-a901-572d78bc50f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.034233779Z" level=info msg="Starting container: c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986" id=5fe7f6f6-fe38-41ff-bbc4-f337d54493e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.038974014Z" level=info msg="Started container" PID=1715 containerID=c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper id=5fe7f6f6-fe38-41ff-bbc4-f337d54493e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06e66ba8280f058b706a5e01400a330ddd899b4371c9b8506409a133acd295c9
	Nov 23 11:18:49 embed-certs-715679 conmon[1712]: conmon c66279f8f6e240c97df3 <ninfo>: container 1715 exited with status 1
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.37983805Z" level=info msg="Removing container: a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670" id=d088b57e-6467-4fea-bcff-2dfcc597bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.389755798Z" level=info msg="Error loading conmon cgroup of container a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670: cgroup deleted" id=d088b57e-6467-4fea-bcff-2dfcc597bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:18:49 embed-certs-715679 crio[654]: time="2025-11-23T11:18:49.393206558Z" level=info msg="Removed container a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65/dashboard-metrics-scraper" id=d088b57e-6467-4fea-bcff-2dfcc597bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c66279f8f6e24       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   06e66ba8280f0       dashboard-metrics-scraper-6ffb444bf9-pqt65   kubernetes-dashboard
	0635b3b4249e8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   bdac71b37f1d6       storage-provisioner                          kube-system
	2cf450fb7ea4a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   925a16ac680b4       kubernetes-dashboard-855c9754f9-jz7sf        kubernetes-dashboard
	6d43e5477c835       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   7204d9966bf4c       coredns-66bc5c9577-9gghc                     kube-system
	a25ac353726d3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   d1be927d705c6       busybox                                      default
	fa13ac96e1521       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   bdac71b37f1d6       storage-provisioner                          kube-system
	2e9c10cadc1c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   54a3dbde4bf04       kindnet-gh5h2                                kube-system
	75d7b06e8aa7d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   1e6e55ed3f8b9       kube-proxy-84tx6                             kube-system
	20df221b7dfb3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   808b5cf266285       kube-scheduler-embed-certs-715679            kube-system
	3705907a0fd2a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6a0a3d8867a3a       kube-apiserver-embed-certs-715679            kube-system
	c20c209f3dc2b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   31ae4bf9c577e       kube-controller-manager-embed-certs-715679   kube-system
	d4260b2942288       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   bfb7a0c0ce678       etcd-embed-certs-715679                      kube-system
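The container status table above is essentially crictl output collected from the node; a sketch of reproducing it by hand (illustrative):

	sudo crictl ps -a     # all containers, including Exited ones
	sudo crictl pods      # the corresponding pod sandboxes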
	
	
	==> coredns [6d43e5477c8354b480be323d501bde9ccdf2ce5fb0a610110f36cc963145e4b4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39641 - 33274 "HINFO IN 1885163275542618875.5777297666075694139. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033896281s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-715679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-715679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-715679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_16_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:16:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-715679
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:18:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:18:29 +0000   Sun, 23 Nov 2025 11:17:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-715679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0f9e54f4-bafa-460f-a78e-697026168606
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-9gghc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-embed-certs-715679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-gh5h2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-embed-certs-715679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-715679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-84tx6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-embed-certs-715679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pqt65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jz7sf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m24s                  node-controller  Node embed-certs-715679 event: Registered Node embed-certs-715679 in Controller
	  Normal   NodeReady                101s                   kubelet          Node embed-certs-715679 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node embed-certs-715679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node embed-certs-715679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node embed-certs-715679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-715679 event: Registered Node embed-certs-715679 in Controller
	
	
	==> dmesg <==
	[Nov23 10:59] overlayfs: idmapped layers are currently not supported
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4260b294228835eee6fa398c0acc73e7c5e3063b52483fb95cfd3e2c8d0cb77] <==
	{"level":"warn","ts":"2025-11-23T11:17:56.529535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.588295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.640156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.704172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.728654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.745521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.769297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.793663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.805071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.827789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.853122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.868601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.889395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.907450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.924434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.947964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.961536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:56.979271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.009868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.013969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.031415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.052678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.076387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.116858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:17:57.193474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36408","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:18:58 up  4:01,  0 user,  load average: 3.49, 3.52, 2.97
	Linux embed-certs-715679 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e9c10cadc1c93a0579863766c9dd59aaf1ebf2733e6a3127e1e121114213768] <==
	I1123 11:17:59.695838       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:17:59.696216       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:17:59.696775       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:17:59.696845       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:17:59.696892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:17:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:17:59.875571       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:17:59.875589       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:17:59.875598       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:17:59.876299       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:18:29.875693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:18:29.875978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:18:29.876231       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:18:29.876369       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:18:31.276110       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:18:31.276143       1 metrics.go:72] Registering metrics
	I1123 11:18:31.276207       1 controller.go:711] "Syncing nftables rules"
	I1123 11:18:39.876253       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:18:39.876316       1 main.go:301] handling current node
	I1123 11:18:49.883255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 11:18:49.883294       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3705907a0fd2afd823aab9cf790cd7cbe11c78e937bd2144bafe03ce3ae8913c] <==
	I1123 11:17:58.530468       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:17:58.530474       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:17:58.534076       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:17:58.534089       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:17:58.545299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 11:17:58.545353       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:17:58.545478       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:17:58.545680       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 11:17:58.545732       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:17:58.549005       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 11:17:58.553694       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:17:58.568357       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:17:58.570332       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1123 11:17:58.584644       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:17:58.958460       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:17:59.025267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:17:59.365010       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:17:59.566452       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:17:59.680684       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:17:59.756661       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:17:59.884097       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.188.128"}
	I1123 11:17:59.919942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.46.78"}
	I1123 11:18:02.482164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:18:02.887223       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:18:03.032976       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c20c209f3dc2baa15a537d778f7bcaa21c1a0e5778f19fb4930042fa54f7c132] <==
	I1123 11:18:02.489126       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:18:02.489754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:18:02.490238       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:18:02.491279       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:18:02.491533       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-715679"
	I1123 11:18:02.491677       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:18:02.495116       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:18:02.496324       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 11:18:02.497836       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 11:18:02.502095       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:18:02.503288       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:18:02.505086       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 11:18:02.509657       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:18:02.511083       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:18:02.514598       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:18:02.516983       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:18:02.520324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:18:02.525325       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 11:18:02.525336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:18:02.525456       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:18:02.526040       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:18:02.526087       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:18:02.526104       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:18:02.526567       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:18:02.541499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [75d7b06e8aa7dcd731688456f75103f5b70f9d0a304f7bc68eb282728b5c6cd5] <==
	I1123 11:17:59.776768       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:17:59.955903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:18:00.197957       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:18:00.201194       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:18:00.201317       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:18:00.272471       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:18:00.272615       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:18:00.286484       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:18:00.286983       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:18:00.287001       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:18:00.289226       1 config.go:200] "Starting service config controller"
	I1123 11:18:00.289357       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:18:00.289455       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:18:00.289498       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:18:00.289556       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:18:00.289598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:18:00.294824       1 config.go:309] "Starting node config controller"
	I1123 11:18:00.294931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:18:00.294966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:18:00.389644       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:18:00.389744       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:18:00.389659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [20df221b7dfb3ece226ab60848a3397d3f42e4fc7e2292d50c22f6f58131c199] <==
	I1123 11:17:56.331071       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:17:58.691651       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:17:58.695913       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:17:58.710541       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:17:58.710579       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:17:58.710617       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:58.710623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:17:58.710636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:17:58.710643       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:17:58.728332       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:17:58.732372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:17:58.819410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:17:58.819471       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 11:17:58.819564       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:18:07 embed-certs-715679 kubelet[782]: I1123 11:18:07.297023     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 11:18:08 embed-certs-715679 kubelet[782]: I1123 11:18:08.247766     782 scope.go:117] "RemoveContainer" containerID="dd179f2b97e7ee363d054856bdd28a53cff4cc38aacd6faa8fe879a4264ce0c8"
	Nov 23 11:18:09 embed-certs-715679 kubelet[782]: I1123 11:18:09.253513     782 scope.go:117] "RemoveContainer" containerID="dd179f2b97e7ee363d054856bdd28a53cff4cc38aacd6faa8fe879a4264ce0c8"
	Nov 23 11:18:09 embed-certs-715679 kubelet[782]: I1123 11:18:09.254191     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:09 embed-certs-715679 kubelet[782]: E1123 11:18:09.254497     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:10 embed-certs-715679 kubelet[782]: I1123 11:18:10.255329     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:10 embed-certs-715679 kubelet[782]: E1123 11:18:10.255503     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:13 embed-certs-715679 kubelet[782]: I1123 11:18:13.281379     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jz7sf" podStartSLOduration=1.090559548 podStartE2EDuration="10.280699811s" podCreationTimestamp="2025-11-23 11:18:03 +0000 UTC" firstStartedPulling="2025-11-23 11:18:03.535441886 +0000 UTC m=+11.791551255" lastFinishedPulling="2025-11-23 11:18:12.725582149 +0000 UTC m=+20.981691518" observedRunningTime="2025-11-23 11:18:13.280448055 +0000 UTC m=+21.536557440" watchObservedRunningTime="2025-11-23 11:18:13.280699811 +0000 UTC m=+21.536809180"
	Nov 23 11:18:13 embed-certs-715679 kubelet[782]: I1123 11:18:13.424110     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:13 embed-certs-715679 kubelet[782]: E1123 11:18:13.424324     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:23 embed-certs-715679 kubelet[782]: I1123 11:18:23.961745     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:24 embed-certs-715679 kubelet[782]: I1123 11:18:24.303395     782 scope.go:117] "RemoveContainer" containerID="6b12c193b39a9ba3917031caacc312cde25eb17a0a9c8a594811e2f07db3b97f"
	Nov 23 11:18:24 embed-certs-715679 kubelet[782]: I1123 11:18:24.303701     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:24 embed-certs-715679 kubelet[782]: E1123 11:18:24.303858     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:30 embed-certs-715679 kubelet[782]: I1123 11:18:30.322215     782 scope.go:117] "RemoveContainer" containerID="fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b"
	Nov 23 11:18:33 embed-certs-715679 kubelet[782]: I1123 11:18:33.424192     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:33 embed-certs-715679 kubelet[782]: E1123 11:18:33.424388     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:48 embed-certs-715679 kubelet[782]: I1123 11:18:48.960979     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:49 embed-certs-715679 kubelet[782]: I1123 11:18:49.378446     782 scope.go:117] "RemoveContainer" containerID="a4bb691a35b0ebf7e0e7af72fc0672cb0675212764693e19da78d20bd3740670"
	Nov 23 11:18:50 embed-certs-715679 kubelet[782]: I1123 11:18:50.382391     782 scope.go:117] "RemoveContainer" containerID="c66279f8f6e240c97df37b6dd8235c0ae1d24f5de15a3ddc5f3d14e663988986"
	Nov 23 11:18:50 embed-certs-715679 kubelet[782]: E1123 11:18:50.383243     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pqt65_kubernetes-dashboard(58636cea-3dcd-47bf-8de9-409e2da12fc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pqt65" podUID="58636cea-3dcd-47bf-8de9-409e2da12fc5"
	Nov 23 11:18:52 embed-certs-715679 kubelet[782]: E1123 11:18:52.199650     782 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio-06e66ba8280f058b706a5e01400a330ddd899b4371c9b8506409a133acd295c9\": RecentStats: unable to find data in memory cache]"
	Nov 23 11:18:52 embed-certs-715679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:18:52 embed-certs-715679 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:18:52 embed-certs-715679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2cf450fb7ea4ad6a81a7878a6098c4aab3262b246b1d1326e5ce26be1e08beba] <==
	2025/11/23 11:18:12 Starting overwatch
	2025/11/23 11:18:12 Using namespace: kubernetes-dashboard
	2025/11/23 11:18:12 Using in-cluster config to connect to apiserver
	2025/11/23 11:18:12 Using secret token for csrf signing
	2025/11/23 11:18:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:18:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:18:12 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 11:18:12 Generating JWE encryption key
	2025/11/23 11:18:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:18:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:18:13 Initializing JWE encryption key from synchronized object
	2025/11/23 11:18:13 Creating in-cluster Sidecar client
	2025/11/23 11:18:13 Serving insecurely on HTTP port: 9090
	2025/11/23 11:18:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:18:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0635b3b4249e89f567cbfcf4fca7e7c36f6918fc08b8db8d3517ee5cc414b46a] <==
	I1123 11:18:30.439250       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:18:30.439304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:18:30.445838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:33.900799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:38.161242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:41.760150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:44.814843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:47.837229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:47.844635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:47.845184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:18:47.845496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-715679_f4a29ed1-26c5-4062-9639-543f68ec6c6e!
	I1123 11:18:47.846484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69cca960-8539-4f65-91a5-a2434eb78e5c", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-715679_f4a29ed1-26c5-4062-9639-543f68ec6c6e became leader
	W1123 11:18:47.855131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:47.862719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:18:47.947685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-715679_f4a29ed1-26c5-4062-9639-543f68ec6c6e!
	W1123 11:18:49.866932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:49.873043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:51.878075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:51.902200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:53.904996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:53.921658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:55.925874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:55.940262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:57.943412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:18:57.955586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fa13ac96e1521657e764697d7ba6ea5ca642fe85f9ffe908b95e26442c09866b] <==
	I1123 11:17:59.808173       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:18:29.827077       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-715679 -n embed-certs-715679
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-715679 -n embed-certs-715679: exit status 2 (526.578471ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-715679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.80s)
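The coredns, kindnet and storage-provisioner logs above all fail the same way while the apiserver comes back up: a plain TCP dial to the default kubernetes Service ClusterIP times out ("dial tcp 10.96.0.1:443: i/o timeout"). The sketch below only illustrates that probe; it is a hypothetical standalone program, not part of the test suite, and it assumes it is run from inside a pod on this node so that the 10.96.0.1 ClusterIP is routable.

	// probe.go: minimal illustration of the connectivity check failing in the logs above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Default kubernetes Service ClusterIP seen throughout this report.
		const addr = "10.96.0.1:443"

		conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
		if err != nil {
			// Same failure mode the components above log: "i/o timeout".
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}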

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (295.068477ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:19:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
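The exit status 11 above comes from the paused-state check that `addons enable` performs first: it lists containers through the runtime, which on this crio profile ends in `sudo runc list -f json`, and that command fails with "open /run/runc: no such file or directory" because runc has no state directory on the node yet. Below is a rough standalone sketch of that kind of check, assuming the docker driver and this run's node container name (newest-cni-058071); it is not minikube's implementation, only an illustration of what the error message points at.

	// pausedcheck.go: hypothetical sketch of a paused-container check done via runc.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// runcContainer is the subset of `runc list -f json` output used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		node := "newest-cni-058071" // node container name for this profile (docker driver)

		// Equivalent of: docker exec <node> sudo runc list -f json
		out, err := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is where the failure above surfaces: runc exits 1 when /run/runc is missing.
			log.Fatalf("runc list failed: %v", err)
		}

		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatalf("parsing runc output: %v", err)
		}

		paused := 0
		for _, c := range containers {
			if c.Status == "paused" {
				paused++
			}
		}
		fmt.Printf("%d of %d containers paused\n", paused, len(containers))
	}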
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-058071
helpers_test.go:243: (dbg) docker inspect newest-cni-058071:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559",
	        "Created": "2025-11-23T11:19:09.249053007Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739651,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:19:09.328430469Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/hostname",
	        "HostsPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/hosts",
	        "LogPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559-json.log",
	        "Name": "/newest-cni-058071",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-058071:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-058071",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559",
	                "LowerDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-058071",
	                "Source": "/var/lib/docker/volumes/newest-cni-058071/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-058071",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-058071",
	                "name.minikube.sigs.k8s.io": "newest-cni-058071",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a3f0010fb5eadd88deaf3f07254268036eb34e5170bd7d91ce935caebbca1d5",
	            "SandboxKey": "/var/run/docker/netns/3a3f0010fb5e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-058071": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:5c:b7:72:77:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2ad1d74afe18771af1930500adbc0606f203b00728de9cd7c808850d196bbca",
	                    "EndpointID": "6dc52e061e83a7f6b470395a74f51f639c3ca5044f2472986b529c45c8d6cf7f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-058071",
	                        "80b941940765"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
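The inspect output above carries the two facts the rest of this run depends on: the published host ports (22/tcp on 127.0.0.1:33827, 8443/tcp on 33830) and the container's static address 192.168.76.2 on the newest-cni-058071 network. As a minimal sketch only (assuming the JSON array above has been saved to a hypothetical inspect.json; this is not the test harness's own parsing), the same two fields can be pulled back out with encoding/json:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Only the fields used below; the full "docker container inspect" document shown
    // above contains many more.
    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIP   string `json:"HostIp"`
    			HostPort string
    		}
    		Networks map[string]struct {
    			IPAddress string
    		}
    	}
    }

    func main() {
    	// Assumption: the inspect JSON printed above was written to inspect.json.
    	data, err := os.ReadFile("inspect.json")
    	if err != nil {
    		panic(err)
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(data, &entries); err != nil {
    		panic(err)
    	}
    	c := entries[0]
    	if ssh := c.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
    		// For this run: 127.0.0.1:33827
    		fmt.Printf("ssh reachable at %s:%s\n", ssh[0].HostIP, ssh[0].HostPort)
    	}
    	// For this run: 192.168.76.2
    	fmt.Println("container IP:", c.NetworkSettings.Networks["newest-cni-058071"].IPAddress)
    }

The cli_runner.go entries later in the log read the same SSH port with a docker --format Go template instead of decoding the whole document.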
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071
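The status probe on the previous line narrows the output with a Go template, --format={{.Host}}, so only the host state is printed. A minimal sketch of the same probe driven from Go (assuming the binary path out/minikube-linux-arm64 used throughout this report; hypothetical code, not helpers_test.go itself):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same invocation as the post-mortem probe above: print only the host field
    	// of "minikube status" for the profile under test.
    	out, err := exec.Command("out/minikube-linux-arm64", "status",
    		"--format", "{{.Host}}", "-p", "newest-cni-058071").Output()
    	if err != nil {
    		fmt.Println("status probe failed:", err)
    		return
    	}
    	// Typically prints Running or Stopped.
    	fmt.Println("host state:", strings.TrimSpace(string(out)))
    }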
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-058071 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-058071 logs -n 25: (1.210779649s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ delete  │ -p old-k8s-version-378086                                                                                                                                                                                                                     │ old-k8s-version-378086       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:16 UTC │
	│ delete  │ -p cert-expiration-629387                                                                                                                                                                                                                     │ cert-expiration-629387       │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:15 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:19:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:19:03.126804  738941 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:19:03.126968  738941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:03.126975  738941 out.go:374] Setting ErrFile to fd 2...
	I1123 11:19:03.126980  738941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:03.127238  738941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:19:03.127675  738941 out.go:368] Setting JSON to false
	I1123 11:19:03.128651  738941 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14492,"bootTime":1763882251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:19:03.128718  738941 start.go:143] virtualization:  
	I1123 11:19:03.134375  738941 out.go:179] * [newest-cni-058071] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:19:03.137575  738941 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:19:03.137657  738941 notify.go:221] Checking for updates...
	I1123 11:19:03.144744  738941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:19:03.147859  738941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:03.150894  738941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:19:03.154190  738941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:19:03.157202  738941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:19:03.160747  738941 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:03.160900  738941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:19:03.211816  738941 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:19:03.211941  738941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:03.325060  738941 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:03.312403478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:03.325190  738941 docker.go:319] overlay module found
	I1123 11:19:03.328355  738941 out.go:179] * Using the docker driver based on user configuration
	I1123 11:19:03.331276  738941 start.go:309] selected driver: docker
	I1123 11:19:03.331298  738941 start.go:927] validating driver "docker" against <nil>
	I1123 11:19:03.331327  738941 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:19:03.332060  738941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:03.410258  738941 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:03.398951216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:03.410408  738941 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 11:19:03.410433  738941 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 11:19:03.410654  738941 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:19:03.413449  738941 out.go:179] * Using Docker driver with root privileges
	I1123 11:19:03.416334  738941 cni.go:84] Creating CNI manager for ""
	I1123 11:19:03.416413  738941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:03.416430  738941 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 11:19:03.416523  738941 start.go:353] cluster config:
	{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:03.420493  738941 out.go:179] * Starting "newest-cni-058071" primary control-plane node in "newest-cni-058071" cluster
	I1123 11:19:03.423310  738941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:19:03.426328  738941 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:19:03.429236  738941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:03.429306  738941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:19:03.429319  738941 cache.go:65] Caching tarball of preloaded images
	I1123 11:19:03.429328  738941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:19:03.429461  738941 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:19:03.429474  738941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:19:03.429639  738941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:03.429659  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json: {Name:mk7f47fa5ad9d2a149a975f497386b2e7d7edc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:03.452502  738941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:19:03.452521  738941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:19:03.452535  738941 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:19:03.452565  738941 start.go:360] acquireMachinesLock for newest-cni-058071: {Name:mkcc8b04939d321e7fa14f673dfa688f531ff5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:19:03.452663  738941 start.go:364] duration metric: took 82.792µs to acquireMachinesLock for "newest-cni-058071"
	I1123 11:19:03.453068  738941 start.go:93] Provisioning new machine with config: &{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:19:03.453166  738941 start.go:125] createHost starting for "" (driver="docker")
	I1123 11:19:01.257492  735340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:01.757576  735340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:02.258365  735340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:02.757552  735340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:03.258149  735340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:03.757881  735340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:03.927223  735340 kubeadm.go:1114] duration metric: took 4.523651594s to wait for elevateKubeSystemPrivileges
	I1123 11:19:03.927250  735340 kubeadm.go:403] duration metric: took 23.609522087s to StartCluster
	I1123 11:19:03.927267  735340 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:03.927326  735340 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:03.927972  735340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:03.928164  735340 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:19:03.928309  735340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:19:03.928565  735340 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:03.928607  735340 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:19:03.928669  735340 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103096"
	I1123 11:19:03.928683  735340 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103096"
	I1123 11:19:03.928703  735340 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:19:03.929208  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:19:03.929651  735340 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103096"
	I1123 11:19:03.929681  735340 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103096"
	I1123 11:19:03.929944  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:19:03.938187  735340 out.go:179] * Verifying Kubernetes components...
	I1123 11:19:03.941594  735340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:03.974687  735340 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103096"
	I1123 11:19:03.974728  735340 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:19:03.982189  735340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:19:04.003787  735340 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:19:04.007041  735340 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:04.007063  735340 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:19:04.007126  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:19:04.007771  735340 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:04.007789  735340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:19:04.007840  735340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:19:04.048889  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:19:04.054906  735340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:19:04.463850  735340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:04.601785  735340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:04.646594  735340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:19:04.646711  735340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:05.904409  735340 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.302590378s)
	I1123 11:19:05.904668  735340 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.257942175s)
	I1123 11:19:05.905400  735340 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:19:05.905668  735340 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.259048433s)
	I1123 11:19:05.905684  735340 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 11:19:05.908832  735340 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 11:19:03.456553  738941 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 11:19:03.456782  738941 start.go:159] libmachine.API.Create for "newest-cni-058071" (driver="docker")
	I1123 11:19:03.456815  738941 client.go:173] LocalClient.Create starting
	I1123 11:19:03.456880  738941 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem
	I1123 11:19:03.456917  738941 main.go:143] libmachine: Decoding PEM data...
	I1123 11:19:03.456932  738941 main.go:143] libmachine: Parsing certificate...
	I1123 11:19:03.457021  738941 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem
	I1123 11:19:03.457050  738941 main.go:143] libmachine: Decoding PEM data...
	I1123 11:19:03.457095  738941 main.go:143] libmachine: Parsing certificate...
	I1123 11:19:03.457546  738941 cli_runner.go:164] Run: docker network inspect newest-cni-058071 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 11:19:03.477909  738941 cli_runner.go:211] docker network inspect newest-cni-058071 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 11:19:03.477984  738941 network_create.go:284] running [docker network inspect newest-cni-058071] to gather additional debugging logs...
	I1123 11:19:03.478006  738941 cli_runner.go:164] Run: docker network inspect newest-cni-058071
	W1123 11:19:03.495176  738941 cli_runner.go:211] docker network inspect newest-cni-058071 returned with exit code 1
	I1123 11:19:03.495210  738941 network_create.go:287] error running [docker network inspect newest-cni-058071]: docker network inspect newest-cni-058071: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-058071 not found
	I1123 11:19:03.495223  738941 network_create.go:289] output of [docker network inspect newest-cni-058071]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-058071 not found
	
	** /stderr **
	I1123 11:19:03.495367  738941 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:19:03.513709  738941 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
	I1123 11:19:03.514062  738941 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6aa8d6e10592 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:61:e9:d9:d2:34} reservation:<nil>}
	I1123 11:19:03.514418  738941 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b955e06248a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:f3:13:23:8c:71} reservation:<nil>}
	I1123 11:19:03.514866  738941 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019df750}
	I1123 11:19:03.514904  738941 network_create.go:124] attempt to create docker network newest-cni-058071 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 11:19:03.514959  738941 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-058071 newest-cni-058071
	I1123 11:19:03.579804  738941 network_create.go:108] docker network newest-cni-058071 192.168.76.0/24 created
	I1123 11:19:03.579838  738941 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-058071" container
	I1123 11:19:03.579911  738941 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 11:19:03.596621  738941 cli_runner.go:164] Run: docker volume create newest-cni-058071 --label name.minikube.sigs.k8s.io=newest-cni-058071 --label created_by.minikube.sigs.k8s.io=true
	I1123 11:19:03.613667  738941 oci.go:103] Successfully created a docker volume newest-cni-058071
	I1123 11:19:03.613755  738941 cli_runner.go:164] Run: docker run --rm --name newest-cni-058071-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-058071 --entrypoint /usr/bin/test -v newest-cni-058071:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 11:19:04.361082  738941 oci.go:107] Successfully prepared a docker volume newest-cni-058071
	I1123 11:19:04.361145  738941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:04.361154  738941 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 11:19:04.361224  738941 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-058071:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 11:19:05.911801  735340 addons.go:530] duration metric: took 1.983188083s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 11:19:06.410922  735340 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-103096" context rescaled to 1 replicas
	W1123 11:19:07.908525  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:09.915369  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:09.182518  738941 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-058071:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.821251612s)
	I1123 11:19:09.182559  738941 kic.go:203] duration metric: took 4.8214004s to extract preloaded images to volume ...
	W1123 11:19:09.182709  738941 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:19:09.182814  738941 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:19:09.234228  738941 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-058071 --name newest-cni-058071 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-058071 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-058071 --network newest-cni-058071 --ip 192.168.76.2 --volume newest-cni-058071:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:19:09.575205  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Running}}
	I1123 11:19:09.595800  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:09.619968  738941 cli_runner.go:164] Run: docker exec newest-cni-058071 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:19:09.667542  738941 oci.go:144] the created container "newest-cni-058071" has a running status.
	I1123 11:19:09.667568  738941 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa...
	I1123 11:19:09.936587  738941 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:19:09.959603  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:09.979919  738941 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:19:09.979939  738941 kic_runner.go:114] Args: [docker exec --privileged newest-cni-058071 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:19:10.064215  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:10.108325  738941 machine.go:94] provisionDockerMachine start ...
	I1123 11:19:10.108429  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:10.145714  738941 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:10.146143  738941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1123 11:19:10.146161  738941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:19:10.146806  738941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46760->127.0.0.1:33827: read: connection reset by peer
	W1123 11:19:12.408660  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:14.408967  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:13.297110  738941 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:13.297134  738941 ubuntu.go:182] provisioning hostname "newest-cni-058071"
	I1123 11:19:13.297215  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:13.314133  738941 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:13.314452  738941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1123 11:19:13.314470  738941 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-058071 && echo "newest-cni-058071" | sudo tee /etc/hostname
	I1123 11:19:13.474938  738941 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:13.475121  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:13.494356  738941 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:13.494703  738941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1123 11:19:13.494730  738941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-058071' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-058071/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-058071' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:19:13.649740  738941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:19:13.649768  738941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:19:13.649793  738941 ubuntu.go:190] setting up certificates
	I1123 11:19:13.649803  738941 provision.go:84] configureAuth start
	I1123 11:19:13.649880  738941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:13.667761  738941 provision.go:143] copyHostCerts
	I1123 11:19:13.667845  738941 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:19:13.667863  738941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:19:13.667983  738941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:19:13.668104  738941 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:19:13.668117  738941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:19:13.668153  738941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:19:13.668216  738941 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:19:13.668226  738941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:19:13.668253  738941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:19:13.668313  738941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.newest-cni-058071 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-058071]
	I1123 11:19:13.783532  738941 provision.go:177] copyRemoteCerts
	I1123 11:19:13.783599  738941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:19:13.783638  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:13.802163  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:13.909838  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:19:13.928754  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:19:13.948868  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:19:13.966510  738941 provision.go:87] duration metric: took 316.684189ms to configureAuth
	I1123 11:19:13.966552  738941 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:19:13.966792  738941 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:13.966901  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:13.987897  738941 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:13.988218  738941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1123 11:19:13.988237  738941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:19:14.301514  738941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:19:14.301533  738941 machine.go:97] duration metric: took 4.193186252s to provisionDockerMachine
	I1123 11:19:14.301542  738941 client.go:176] duration metric: took 10.844720985s to LocalClient.Create
	I1123 11:19:14.301555  738941 start.go:167] duration metric: took 10.844774853s to libmachine.API.Create "newest-cni-058071"
	I1123 11:19:14.301562  738941 start.go:293] postStartSetup for "newest-cni-058071" (driver="docker")
	I1123 11:19:14.301571  738941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:19:14.301629  738941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:19:14.301666  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:14.319467  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:14.426687  738941 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:19:14.430262  738941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:19:14.430342  738941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:19:14.430359  738941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:19:14.430427  738941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:19:14.430511  738941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:19:14.430632  738941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:19:14.438282  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:14.456229  738941 start.go:296] duration metric: took 154.653231ms for postStartSetup
	I1123 11:19:14.456576  738941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:14.476339  738941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:14.476692  738941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:19:14.476740  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:14.494951  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:14.598748  738941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:19:14.603733  738941 start.go:128] duration metric: took 11.150550607s to createHost
	I1123 11:19:14.603763  738941 start.go:83] releasing machines lock for "newest-cni-058071", held for 11.151091191s
	I1123 11:19:14.603855  738941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:14.620545  738941 ssh_runner.go:195] Run: cat /version.json
	I1123 11:19:14.620558  738941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:19:14.620599  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:14.620614  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:14.640648  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:14.662470  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:14.838884  738941 ssh_runner.go:195] Run: systemctl --version
	I1123 11:19:14.845250  738941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:19:14.880893  738941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:19:14.885068  738941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:19:14.885163  738941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:19:14.915993  738941 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 11:19:14.916020  738941 start.go:496] detecting cgroup driver to use...
	I1123 11:19:14.916061  738941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:19:14.916123  738941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:19:14.936500  738941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:19:14.951361  738941 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:19:14.951460  738941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:19:14.971471  738941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:19:14.990651  738941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:19:15.128696  738941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:19:15.252445  738941 docker.go:234] disabling docker service ...
	I1123 11:19:15.252570  738941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:19:15.274210  738941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:19:15.287941  738941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:19:15.406718  738941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:19:15.532646  738941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:19:15.546693  738941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:19:15.562043  738941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:19:15.562162  738941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.570920  738941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:19:15.570991  738941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.579648  738941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.588659  738941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.597359  738941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:19:15.605638  738941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.614197  738941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.629235  738941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:15.637951  738941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:19:15.645562  738941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:19:15.653976  738941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:15.770964  738941 ssh_runner.go:195] Run: sudo systemctl restart crio
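The block from 11:19:15.546 through the restart above configures CRI-O for this profile: crictl is pointed at the CRI-O socket, the pause image and cgroup manager are set in /etc/crio/crio.conf.d/02-crio.conf, net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls, and IP forwarding is enabled. Condensed into a hand-runnable sketch (same files and values as in the log; the sed expressions are simplified relative to the logged ones):

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio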
	I1123 11:19:15.928819  738941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:19:15.928940  738941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:19:15.932796  738941 start.go:564] Will wait 60s for crictl version
	I1123 11:19:15.932934  738941 ssh_runner.go:195] Run: which crictl
	I1123 11:19:15.936329  738941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:19:15.966349  738941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:19:15.966486  738941 ssh_runner.go:195] Run: crio --version
	I1123 11:19:15.994685  738941 ssh_runner.go:195] Run: crio --version
	I1123 11:19:16.038779  738941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:19:16.041744  738941 cli_runner.go:164] Run: docker network inspect newest-cni-058071 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:19:16.058559  738941 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:19:16.063643  738941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
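The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the network gateway. Unpacked, assuming the same 192.168.76.1 gateway as in the log:

	# drop any stale host.minikube.internal entry, append the current one, then copy the result back
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts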
	I1123 11:19:16.078613  738941 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 11:19:16.081447  738941 kubeadm.go:884] updating cluster {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:19:16.081602  738941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:16.081684  738941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:16.121866  738941 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:16.121892  738941 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:19:16.121954  738941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:16.149983  738941 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:16.150014  738941 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:19:16.150021  738941 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:19:16.150113  738941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-058071 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:19:16.150201  738941 ssh_runner.go:195] Run: crio config
	I1123 11:19:16.207433  738941 cni.go:84] Creating CNI manager for ""
	I1123 11:19:16.207458  738941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:16.207476  738941 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 11:19:16.207528  738941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-058071 NodeName:newest-cni-058071 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:19:16.207684  738941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-058071"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:19:16.207755  738941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:19:16.216540  738941 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:19:16.216674  738941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:19:16.224517  738941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 11:19:16.237248  738941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:19:16.250288  738941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
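At this point the kubeadm configuration shown earlier has been written to /var/tmp/minikube/kubeadm.yaml.new (2212 bytes). If you want to sanity-check such a file on a node with the same kubeadm binary before a real init, a dry run is one option (a suggestion for manual debugging, not part of the test flow):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run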
	I1123 11:19:16.263345  738941 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:19:16.267981  738941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:16.277936  738941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:16.391843  738941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:16.410819  738941 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071 for IP: 192.168.76.2
	I1123 11:19:16.410894  738941 certs.go:195] generating shared ca certs ...
	I1123 11:19:16.410918  738941 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:16.411080  738941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:19:16.411124  738941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:19:16.411134  738941 certs.go:257] generating profile certs ...
	I1123 11:19:16.411187  738941 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.key
	I1123 11:19:16.411202  738941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.crt with IP's: []
	I1123 11:19:16.543504  738941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.crt ...
	I1123 11:19:16.543536  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.crt: {Name:mkc488371a23ebd2dead4aee3ee387509f52912c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:16.543746  738941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.key ...
	I1123 11:19:16.543761  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.key: {Name:mkdada2120f5dec7fe6bf8ecea95ab955c0c544a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:16.543865  738941 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe
	I1123 11:19:16.543882  738941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt.cc862dfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 11:19:16.794126  738941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt.cc862dfe ...
	I1123 11:19:16.794159  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt.cc862dfe: {Name:mk352d7b828661075551a632c10cfaf01e61adb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:16.794348  738941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe ...
	I1123 11:19:16.794364  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe: {Name:mk0e61e3186ea982266d8d5260b45f8716e61607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:16.794446  738941 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt.cc862dfe -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt
	I1123 11:19:16.794537  738941 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key
	I1123 11:19:16.794602  738941 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key
	I1123 11:19:16.794629  738941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt with IP's: []
	I1123 11:19:16.944912  738941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt ...
	I1123 11:19:16.944944  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt: {Name:mk5284d5be455c7d9b45425721923adccc2944cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:16.945142  738941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key ...
	I1123 11:19:16.945162  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key: {Name:mk207f4cd8dde1c7fd1e15ea9112f99e72999c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
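The profile certificates generated above can be inspected with openssl to confirm that the SANs match the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2), for example:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'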
	I1123 11:19:16.945349  738941 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:19:16.945399  738941 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:19:16.945431  738941 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:19:16.945459  738941 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:19:16.945495  738941 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:19:16.945525  738941 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:19:16.945579  738941 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:16.946512  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:19:16.968265  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:19:16.989550  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:19:17.009929  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:19:17.028979  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:19:17.046988  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:19:17.065091  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:19:17.083489  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:19:17.101658  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:19:17.120633  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:19:17.142055  738941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:19:17.165947  738941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:19:17.184273  738941 ssh_runner.go:195] Run: openssl version
	I1123 11:19:17.190909  738941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:19:17.200087  738941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:19:17.203820  738941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:19:17.203928  738941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:19:17.244882  738941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:19:17.253379  738941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:19:17.261397  738941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:17.265032  738941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:17.265128  738941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:17.307930  738941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:19:17.316317  738941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:19:17.324678  738941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:19:17.328476  738941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:19:17.328547  738941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:19:17.369159  738941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
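The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the standard OpenSSL c_rehash convention: the link name is the subject hash of the certificate, which the log computes with `openssl x509 -hash`. The same value can be reproduced by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"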
	I1123 11:19:17.377559  738941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:19:17.381119  738941 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:19:17.381174  738941 kubeadm.go:401] StartCluster: {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:17.381258  738941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:19:17.381320  738941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:19:17.415444  738941 cri.go:89] found id: ""
	I1123 11:19:17.415517  738941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:19:17.423123  738941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:19:17.431029  738941 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:19:17.431098  738941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:19:17.439011  738941 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:19:17.439081  738941 kubeadm.go:158] found existing configuration files:
	
	I1123 11:19:17.439170  738941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 11:19:17.446812  738941 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:19:17.446928  738941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:19:17.454443  738941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 11:19:17.462088  738941 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:19:17.462222  738941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:19:17.469503  738941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 11:19:17.477099  738941 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:19:17.477218  738941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:19:17.484569  738941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 11:19:17.492211  738941 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:19:17.492325  738941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 11:19:17.504634  738941 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:19:17.544330  738941 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:19:17.544396  738941 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:19:17.569228  738941 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:19:17.569307  738941 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:19:17.569347  738941 kubeadm.go:319] OS: Linux
	I1123 11:19:17.569401  738941 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:19:17.569506  738941 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:19:17.569558  738941 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:19:17.569610  738941 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:19:17.569661  738941 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:19:17.569714  738941 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:19:17.569762  738941 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:19:17.569814  738941 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:19:17.569863  738941 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:19:17.639957  738941 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:19:17.640073  738941 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:19:17.640170  738941 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 11:19:17.649945  738941 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:19:17.655330  738941 out.go:252]   - Generating certificates and keys ...
	I1123 11:19:17.655462  738941 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:19:17.655554  738941 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1123 11:19:16.909088  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:19.410253  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:18.171830  738941 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:19:18.326504  738941 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:19:18.754096  738941 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:19:19.059787  738941 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:19:19.729133  738941 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:19:19.729503  738941 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-058071] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:19:20.109111  738941 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:19:20.109556  738941 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-058071] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:19:20.362633  738941 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:19:20.609166  738941 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:19:20.940040  738941 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:19:20.940495  738941 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:19:22.157424  738941 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:19:22.454528  738941 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:19:23.404913  738941 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:19:24.448606  738941 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:19:24.631284  738941 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:19:24.631884  738941 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:19:24.634591  738941 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 11:19:21.909693  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:23.910337  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:24.637952  738941 out.go:252]   - Booting up control plane ...
	I1123 11:19:24.638082  738941 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:19:24.638163  738941 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:19:24.639496  738941 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:19:24.665359  738941 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:19:24.665502  738941 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:19:24.674043  738941 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:19:24.677125  738941 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:19:24.677214  738941 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:19:24.812891  738941 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:19:24.813015  738941 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:19:26.312495  738941 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501746098s
	I1123 11:19:26.314467  738941 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:19:26.314673  738941 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 11:19:26.314788  738941 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:19:26.315333  738941 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1123 11:19:26.409026  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:28.909276  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:30.784517  738941 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.468805602s
	I1123 11:19:32.475056  738941 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.159511153s
	I1123 11:19:32.817103  738941 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50230421s
	I1123 11:19:32.837688  738941 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 11:19:32.852483  738941 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 11:19:32.867843  738941 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 11:19:32.868055  738941 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-058071 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 11:19:32.886468  738941 kubeadm.go:319] [bootstrap-token] Using token: rkd7kl.lipnzrpqbdyuzy43
	I1123 11:19:32.889524  738941 out.go:252]   - Configuring RBAC rules ...
	I1123 11:19:32.889657  738941 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 11:19:32.895339  738941 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 11:19:32.910216  738941 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 11:19:32.915027  738941 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 11:19:32.921862  738941 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 11:19:32.931540  738941 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 11:19:33.224870  738941 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 11:19:33.685761  738941 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 11:19:34.227451  738941 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 11:19:34.228843  738941 kubeadm.go:319] 
	I1123 11:19:34.228923  738941 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 11:19:34.228932  738941 kubeadm.go:319] 
	I1123 11:19:34.229018  738941 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 11:19:34.229026  738941 kubeadm.go:319] 
	I1123 11:19:34.229083  738941 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 11:19:34.229175  738941 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 11:19:34.229230  738941 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 11:19:34.229237  738941 kubeadm.go:319] 
	I1123 11:19:34.229315  738941 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 11:19:34.229324  738941 kubeadm.go:319] 
	I1123 11:19:34.229372  738941 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 11:19:34.229382  738941 kubeadm.go:319] 
	I1123 11:19:34.229462  738941 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 11:19:34.229546  738941 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 11:19:34.229616  738941 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 11:19:34.229624  738941 kubeadm.go:319] 
	I1123 11:19:34.229716  738941 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 11:19:34.229800  738941 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 11:19:34.229809  738941 kubeadm.go:319] 
	I1123 11:19:34.229898  738941 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rkd7kl.lipnzrpqbdyuzy43 \
	I1123 11:19:34.230033  738941 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 11:19:34.230062  738941 kubeadm.go:319] 	--control-plane 
	I1123 11:19:34.230066  738941 kubeadm.go:319] 
	I1123 11:19:34.230170  738941 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 11:19:34.230181  738941 kubeadm.go:319] 
	I1123 11:19:34.230258  738941 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rkd7kl.lipnzrpqbdyuzy43 \
	I1123 11:19:34.230362  738941 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 11:19:34.234753  738941 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 11:19:34.235050  738941 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 11:19:34.235165  738941 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
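The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA public key. It can be recomputed from the CA certificate with the openssl pipeline documented for kubeadm (assuming an RSA CA key, kubeadm's default; the cert path is the certificateDir from the config earlier in this log):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'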
	I1123 11:19:34.235185  738941 cni.go:84] Creating CNI manager for ""
	I1123 11:19:34.235192  738941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:34.238221  738941 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 11:19:31.410015  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:33.909239  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:34.241072  738941 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 11:19:34.245147  738941 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 11:19:34.245169  738941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 11:19:34.259141  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 11:19:34.574403  738941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 11:19:34.574555  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:34.574634  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-058071 minikube.k8s.io/updated_at=2025_11_23T11_19_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=newest-cni-058071 minikube.k8s.io/primary=true
	I1123 11:19:34.714063  738941 ops.go:34] apiserver oom_adj: -16
	I1123 11:19:34.760567  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:35.261600  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:35.760821  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:36.260676  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:36.761388  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:37.261251  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:37.760650  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:38.260656  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:38.761124  738941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:19:38.899416  738941 kubeadm.go:1114] duration metric: took 4.324923021s to wait for elevateKubeSystemPrivileges
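The repeated `kubectl get sa default` calls between 11:19:34 and 11:19:38 are a poll: the run waits until the default service account exists before granting kube-system privileges. As a standalone loop, that wait looks roughly like:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done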
	I1123 11:19:38.899443  738941 kubeadm.go:403] duration metric: took 21.518273778s to StartCluster
	I1123 11:19:38.899459  738941 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:38.899522  738941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:38.900483  738941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:38.900694  738941 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:19:38.900773  738941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:19:38.901011  738941 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:38.901047  738941 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:19:38.901107  738941 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-058071"
	I1123 11:19:38.901120  738941 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-058071"
	I1123 11:19:38.901138  738941 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:38.901936  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:38.902307  738941 addons.go:70] Setting default-storageclass=true in profile "newest-cni-058071"
	I1123 11:19:38.902324  738941 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-058071"
	I1123 11:19:38.902585  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:38.906887  738941 out.go:179] * Verifying Kubernetes components...
	I1123 11:19:38.909797  738941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:38.950888  738941 addons.go:239] Setting addon default-storageclass=true in "newest-cni-058071"
	I1123 11:19:38.950930  738941 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:38.951347  738941 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:38.967887  738941 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:19:38.971786  738941 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:38.971809  738941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:19:38.971873  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:38.980887  738941 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:38.980907  738941 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:19:38.980971  738941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:39.005670  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:39.022056  738941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:39.231973  738941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:39.272940  738941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:19:39.279575  738941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:39.288111  738941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:39.873618  738941 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
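The sed pipeline at 11:19:39.272 injects a hosts block mapping host.minikube.internal to 192.168.76.1 into the CoreDNS Corefile. Whether the injection took can be checked afterwards with something like the following (a manual verification step, not part of the test):

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'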
	I1123 11:19:39.875422  738941 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:39.877058  738941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:39.902711  738941 api_server.go:72] duration metric: took 1.001984175s to wait for apiserver process to appear ...
	I1123 11:19:39.902792  738941 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:19:39.902825  738941 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:39.917653  738941 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
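The 200/ok above comes from the apiserver's /healthz endpoint, which kubeadm clusters expose to unauthenticated callers via the system:public-info-viewer role. The same check can be made from the host with curl (--insecure because the endpoint is served with the cluster's self-signed CA unless you pass --cacert):

	curl --insecure https://192.168.76.2:8443/healthz
	# expected output: ok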
	I1123 11:19:39.918181  738941 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 11:19:39.920233  738941 api_server.go:141] control plane version: v1.34.1
	I1123 11:19:39.920260  738941 api_server.go:131] duration metric: took 17.447327ms to wait for apiserver health ...
	I1123 11:19:39.920269  738941 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:19:39.921023  738941 addons.go:530] duration metric: took 1.019973431s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 11:19:39.927449  738941 system_pods.go:59] 9 kube-system pods found
	I1123 11:19:39.927488  738941 system_pods.go:61] "coredns-66bc5c9577-6hf9z" [19e3f951-c712-41c1-afc3-6f7ab757696c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:19:39.927496  738941 system_pods.go:61] "coredns-66bc5c9577-86c67" [654888ae-1968-446b-bc77-67add47f1646] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:19:39.927504  738941 system_pods.go:61] "etcd-newest-cni-058071" [880c7442-4504-4d3f-bd99-5da4d55fc969] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:19:39.927519  738941 system_pods.go:61] "kindnet-nhmmf" [3a4984b0-33ea-41b8-bcf0-371db0376a23] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 11:19:39.927530  738941 system_pods.go:61] "kube-apiserver-newest-cni-058071" [057ca3d0-73ae-4a19-91e6-c4d4be793d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:19:39.927541  738941 system_pods.go:61] "kube-controller-manager-newest-cni-058071" [1b498c1b-0b85-4f48-a741-21e62c3ee4b5] Running
	I1123 11:19:39.927547  738941 system_pods.go:61] "kube-proxy-k574z" [5d8ab6d1-c0c9-4f98-a624-cee178c49a77] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 11:19:39.927553  738941 system_pods.go:61] "kube-scheduler-newest-cni-058071" [b006970c-6ef8-4240-b994-0c68b254d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:19:39.927562  738941 system_pods.go:61] "storage-provisioner" [44fe1c1c-dd81-4733-a2e9-a014c419bd7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:19:39.927570  738941 system_pods.go:74] duration metric: took 7.294506ms to wait for pod list to return data ...
	I1123 11:19:39.927581  738941 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:19:39.934192  738941 default_sa.go:45] found service account: "default"
	I1123 11:19:39.934219  738941 default_sa.go:55] duration metric: took 6.631286ms for default service account to be created ...
	I1123 11:19:39.934233  738941 kubeadm.go:587] duration metric: took 1.033516544s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:19:39.934249  738941 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:19:39.947162  738941 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:19:39.947198  738941 node_conditions.go:123] node cpu capacity is 2
	I1123 11:19:39.947211  738941 node_conditions.go:105] duration metric: took 12.956638ms to run NodePressure ...
	I1123 11:19:39.947225  738941 start.go:242] waiting for startup goroutines ...
	I1123 11:19:40.379221  738941 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-058071" context rescaled to 1 replicas
	I1123 11:19:40.379260  738941 start.go:247] waiting for cluster config update ...
	I1123 11:19:40.379275  738941 start.go:256] writing updated cluster config ...
	I1123 11:19:40.379613  738941 ssh_runner.go:195] Run: rm -f paused
	I1123 11:19:40.441573  738941 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:19:40.445013  738941 out.go:179] * Done! kubectl is now configured to use "newest-cni-058071" cluster and "default" namespace by default
	W1123 11:19:36.409013  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:38.922690  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.595192231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.60304314Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dc167269-c754-404b-ab4d-6c1f49f1f7f6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.616319595Z" level=info msg="Ran pod sandbox 3f8d299749f3eb6e5a42941d4acdba59de86449adfd1135ebb00e1bc6d61a41a with infra container: kube-system/kindnet-nhmmf/POD" id=dc167269-c754-404b-ab4d-6c1f49f1f7f6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.618669993Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=91ede1fa-6053-4056-8bdd-0437291f775d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.619602216Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=449cf07c-08c0-4a3d-a067-f961cefc09a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.628583547Z" level=info msg="Creating container: kube-system/kindnet-nhmmf/kindnet-cni" id=894f191b-339c-49bf-a4e0-8f26edce8675 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.629301538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.639306433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.639807188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.66086196Z" level=info msg="Created container eb656ddd13813427a877fb8dada9763532ba34ccf111540621431ea1c6386c30: kube-system/kindnet-nhmmf/kindnet-cni" id=894f191b-339c-49bf-a4e0-8f26edce8675 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.664860708Z" level=info msg="Starting container: eb656ddd13813427a877fb8dada9763532ba34ccf111540621431ea1c6386c30" id=17f97ba3-0deb-486d-a795-eb132157a8e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.668572221Z" level=info msg="Started container" PID=1496 containerID=eb656ddd13813427a877fb8dada9763532ba34ccf111540621431ea1c6386c30 description=kube-system/kindnet-nhmmf/kindnet-cni id=17f97ba3-0deb-486d-a795-eb132157a8e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f8d299749f3eb6e5a42941d4acdba59de86449adfd1135ebb00e1bc6d61a41a
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.920310638Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k574z/POD" id=abf0d38b-ec35-45af-975d-1ccabe113ab4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.920384412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.924193388Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=abf0d38b-ec35-45af-975d-1ccabe113ab4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.928203386Z" level=info msg="Ran pod sandbox 4570f9416bf440923f76bfb452bd4f038365c4120554713a242e694f7e1358ef with infra container: kube-system/kube-proxy-k574z/POD" id=abf0d38b-ec35-45af-975d-1ccabe113ab4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.929880652Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0a4c45cb-9767-4387-a5b5-14b347ceade3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.933154985Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=36586e1d-a519-4aab-8d05-a4438e5846bf name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.941862496Z" level=info msg="Creating container: kube-system/kube-proxy-k574z/kube-proxy" id=ea7f23e7-a0be-46f0-9f4a-f08164f00f2f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.941966335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.950600039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.951273302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.98259481Z" level=info msg="Created container eda20f53b4948d73c9ea353fb18bb194eb12eeee87a5c301a88573a7433321dd: kube-system/kube-proxy-k574z/kube-proxy" id=ea7f23e7-a0be-46f0-9f4a-f08164f00f2f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.983529643Z" level=info msg="Starting container: eda20f53b4948d73c9ea353fb18bb194eb12eeee87a5c301a88573a7433321dd" id=48ffe112-e310-48bd-b84f-2c18660b09e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:19:40 newest-cni-058071 crio[838]: time="2025-11-23T11:19:40.989389218Z" level=info msg="Started container" PID=1542 containerID=eda20f53b4948d73c9ea353fb18bb194eb12eeee87a5c301a88573a7433321dd description=kube-system/kube-proxy-k574z/kube-proxy id=48ffe112-e310-48bd-b84f-2c18660b09e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4570f9416bf440923f76bfb452bd4f038365c4120554713a242e694f7e1358ef
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	eda20f53b4948       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   Less than a second ago   Running             kube-proxy                0                   4570f9416bf44       kube-proxy-k574z                            kube-system
	eb656ddd13813       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago             Running             kindnet-cni               0                   3f8d299749f3e       kindnet-nhmmf                               kube-system
	42d60e6ef7436       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago           Running             kube-controller-manager   0                   ffa5a9b8f7b92       kube-controller-manager-newest-cni-058071   kube-system
	6831b592bc8cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago           Running             kube-apiserver            0                   eaa0571f8e130       kube-apiserver-newest-cni-058071            kube-system
	ef642b1e6fda3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago           Running             etcd                      0                   d672e658451c8       etcd-newest-cni-058071                      kube-system
	9d2eba0b730b9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago           Running             kube-scheduler            0                   e0fb16fab2c0e       kube-scheduler-newest-cni-058071            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-058071
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-058071
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=newest-cni-058071
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_19_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:19:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-058071
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:19:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:19:33 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:19:33 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:19:33 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 11:19:33 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-058071
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                50c4c8d6-c4e7-4ed0-b751-2e5f93061714
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-058071                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-nhmmf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-058071             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-058071    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-k574z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-058071             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 0s    kube-proxy       
	  Normal   Starting                 9s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s    kubelet          Node newest-cni-058071 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s    kubelet          Node newest-cni-058071 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s    kubelet          Node newest-cni-058071 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s    node-controller  Node newest-cni-058071 event: Registered Node newest-cni-058071 in Controller
	
	
	==> dmesg <==
	[ +17.527359] overlayfs: idmapped layers are currently not supported
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	[Nov23 11:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ef642b1e6fda366b12a8e288d28af79b2da3822a87c79e628303b0ed6d318481] <==
	{"level":"warn","ts":"2025-11-23T11:19:28.622055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.648356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.670726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.691755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.720178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.724258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.742845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.772350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.802727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.834924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.885639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.951055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:28.973597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.005637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.032901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.066152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.091284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.139177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.187389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.223258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.252292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.298192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.311626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.332774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:29.472323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47446","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:19:42 up  4:02,  0 user,  load average: 3.45, 3.50, 2.99
	Linux newest-cni-058071 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb656ddd13813427a877fb8dada9763532ba34ccf111540621431ea1c6386c30] <==
	I1123 11:19:40.861839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:19:40.862088       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:19:40.862279       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:19:40.862328       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:19:40.862363       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:19:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:19:41.059584       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:19:41.059659       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:19:41.059692       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:19:41.060068       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6831b592bc8cdc1e963f3a677b15aefffb643dd7213a1eefc40b01171820e688] <==
	I1123 11:19:30.851322       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:19:30.851396       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:19:30.851430       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:19:30.944985       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:30.945063       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 11:19:30.966296       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:19:31.038499       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:19:31.057614       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:31.450284       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:19:31.458800       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:19:31.458824       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:19:32.335158       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:19:32.393583       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:19:32.532117       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:19:32.540051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 11:19:32.541335       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:19:32.547058       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:19:32.892444       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:19:33.658595       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:19:33.684404       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:19:33.697550       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 11:19:38.535411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:19:38.744215       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 11:19:38.961540       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:39.065077       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [42d60e6ef74362c742b1b39703ea15dae6affb27dfb695536f83464decb610b6] <==
	I1123 11:19:37.962106       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:19:37.963411       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:19:37.972713       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 11:19:37.972727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 11:19:37.974880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 11:19:37.974972       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 11:19:37.977293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:19:37.977321       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:19:37.977330       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:19:37.979828       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:19:37.980015       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:19:37.980064       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 11:19:37.980184       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:19:37.980232       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:19:37.981386       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 11:19:37.981826       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:19:37.983225       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 11:19:37.984421       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:19:37.987926       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 11:19:37.988006       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 11:19:37.988044       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 11:19:37.988058       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 11:19:37.988065       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 11:19:37.995682       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:19:37.999572       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-058071" podCIDRs=["10.42.0.0/24"]
	
	
	==> kube-proxy [eda20f53b4948d73c9ea353fb18bb194eb12eeee87a5c301a88573a7433321dd] <==
	I1123 11:19:41.043080       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:19:41.123225       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:19:41.225029       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:19:41.225087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:19:41.225151       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:19:41.253733       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:19:41.253786       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:19:41.258443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:19:41.259208       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:19:41.259234       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:19:41.263068       1 config.go:200] "Starting service config controller"
	I1123 11:19:41.263148       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:19:41.263186       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:19:41.263224       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:19:41.263264       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:19:41.263290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:19:41.263977       1 config.go:309] "Starting node config controller"
	I1123 11:19:41.269311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:19:41.269383       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:19:41.364182       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:19:41.364322       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:19:41.364374       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9d2eba0b730b98da64d624e79c929a22ef2a98f370f622d73534ce78b692f2c4] <==
	I1123 11:19:29.316941       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:19:32.455901       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:19:32.456002       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:19:32.462007       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:19:32.462208       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:19:32.462263       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:19:32.462319       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:19:32.472702       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:19:32.473464       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:19:32.473550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:19:32.473582       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:19:32.563094       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 11:19:32.574471       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:19:32.574471       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:19:34 newest-cni-058071 kubelet[1304]: I1123 11:19:34.858284    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-058071" podStartSLOduration=1.858266421 podStartE2EDuration="1.858266421s" podCreationTimestamp="2025-11-23 11:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:34.818593205 +0000 UTC m=+1.365510776" watchObservedRunningTime="2025-11-23 11:19:34.858266421 +0000 UTC m=+1.405183968"
	Nov 23 11:19:34 newest-cni-058071 kubelet[1304]: I1123 11:19:34.883239    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-058071" podStartSLOduration=1.883218248 podStartE2EDuration="1.883218248s" podCreationTimestamp="2025-11-23 11:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:34.860289755 +0000 UTC m=+1.407207318" watchObservedRunningTime="2025-11-23 11:19:34.883218248 +0000 UTC m=+1.430135827"
	Nov 23 11:19:34 newest-cni-058071 kubelet[1304]: I1123 11:19:34.883393    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-058071" podStartSLOduration=2.883387244 podStartE2EDuration="2.883387244s" podCreationTimestamp="2025-11-23 11:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:34.883206424 +0000 UTC m=+1.430123987" watchObservedRunningTime="2025-11-23 11:19:34.883387244 +0000 UTC m=+1.430304799"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.060582    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.061253    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: E1123 11:19:38.809673    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-nhmmf\" is forbidden: User \"system:node:newest-cni-058071\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-058071' and this object" podUID="3a4984b0-33ea-41b8-bcf0-371db0376a23" pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: E1123 11:19:38.809746    1304 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-058071\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-058071' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.818080    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-lib-modules\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.818131    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-xtables-lock\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.818151    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-cni-cfg\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.818169    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6cb\" (UniqueName: \"kubernetes.io/projected/3a4984b0-33ea-41b8-bcf0-371db0376a23-kube-api-access-rw6cb\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.921002    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrlv\" (UniqueName: \"kubernetes.io/projected/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-kube-api-access-dsrlv\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.921076    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-kube-proxy\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.921105    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-xtables-lock\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:38 newest-cni-058071 kubelet[1304]: I1123 11:19:38.921121    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-lib-modules\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: E1123 11:19:40.030901    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: E1123 11:19:40.030952    1304 projected.go:196] Error preparing data for projected volume kube-api-access-rw6cb for pod kube-system/kindnet-nhmmf: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: E1123 11:19:40.031044    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a4984b0-33ea-41b8-bcf0-371db0376a23-kube-api-access-rw6cb podName:3a4984b0-33ea-41b8-bcf0-371db0376a23 nodeName:}" failed. No retries permitted until 2025-11-23 11:19:40.531019332 +0000 UTC m=+7.077936879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rw6cb" (UniqueName: "kubernetes.io/projected/3a4984b0-33ea-41b8-bcf0-371db0376a23-kube-api-access-rw6cb") pod "kindnet-nhmmf" (UID: "3a4984b0-33ea-41b8-bcf0-371db0376a23") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: E1123 11:19:40.152873    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: E1123 11:19:40.152927    1304 projected.go:196] Error preparing data for projected volume kube-api-access-dsrlv for pod kube-system/kube-proxy-k574z: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: E1123 11:19:40.153118    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-kube-api-access-dsrlv podName:5d8ab6d1-c0c9-4f98-a624-cee178c49a77 nodeName:}" failed. No retries permitted until 2025-11-23 11:19:40.652978205 +0000 UTC m=+7.199895760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dsrlv" (UniqueName: "kubernetes.io/projected/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-kube-api-access-dsrlv") pod "kube-proxy-k574z" (UID: "5d8ab6d1-c0c9-4f98-a624-cee178c49a77") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: I1123 11:19:40.542651    1304 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:19:40 newest-cni-058071 kubelet[1304]: W1123 11:19:40.926591    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/crio-4570f9416bf440923f76bfb452bd4f038365c4120554713a242e694f7e1358ef WatchSource:0}: Error finding container 4570f9416bf440923f76bfb452bd4f038365c4120554713a242e694f7e1358ef: Status 404 returned error can't find the container with id 4570f9416bf440923f76bfb452bd4f038365c4120554713a242e694f7e1358ef
	Nov 23 11:19:41 newest-cni-058071 kubelet[1304]: I1123 11:19:41.323225    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nhmmf" podStartSLOduration=3.323203044 podStartE2EDuration="3.323203044s" podCreationTimestamp="2025-11-23 11:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:40.738001751 +0000 UTC m=+7.284919315" watchObservedRunningTime="2025-11-23 11:19:41.323203044 +0000 UTC m=+7.870120608"
	Nov 23 11:19:41 newest-cni-058071 kubelet[1304]: I1123 11:19:41.733872    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k574z" podStartSLOduration=3.733841387 podStartE2EDuration="3.733841387s" podCreationTimestamp="2025-11-23 11:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:41.733206681 +0000 UTC m=+8.280124244" watchObservedRunningTime="2025-11-23 11:19:41.733841387 +0000 UTC m=+8.280758950"
	

                                                
                                                
-- /stdout --
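Note on the CoreDNS rewrite logged at 11:19:39 in the dump above: the sed pipeline run over the coredns ConfigMap is what produces the "host record injected into CoreDNS's ConfigMap" message; it splices a hosts block for host.minikube.internal ahead of the forward directive and a log directive ahead of errors. A quick way to inspect the result by hand is sketched below; the Corefile fragment shown in the comments is reconstructed from the sed expressions, not captured output from this run.

	kubectl --context newest-cni-058071 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Reconstructed fragment the pipeline inserts (assumption, not verbatim output):
	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }
	#   forward . /etc/resolv.conf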
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-058071 -n newest-cni-058071
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-058071 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-86c67 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner: exit status 1 (83.223214ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-86c67" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (333.336031ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:19:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
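Note: the exit status 11 above is raised before the addon is touched. Per the stderr, the enable path first checks whether the cluster is paused by running "sudo runc list -f json" inside the node, and that check itself fails because /run/runc does not exist on this CRI-O node. A minimal way to reproduce the check by hand, assuming the docker driver where the node container shares the profile name:

	docker exec default-k8s-diff-port-103096 sudo runc list -f json
	# minikube's own check surfaced this error; a manual reproduction should show the same:
	#   time="2025-11-23T11:19:57Z" level=error msg="open /run/runc: no such file or directory"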
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103096 describe deploy/metrics-server -n kube-system: exit status 1 (139.112488ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-103096 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
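Note: the deployment is missing because the enable step never ran, so there is no image to compare against the expected fake.domain/registry.k8s.io/echoserver:1.4. On a run where the addon does apply, a manual spot-check of the image override could look like the sketch below (not executed in this run):

	kubectl --context default-k8s-diff-port-103096 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected to print: fake.domain/registry.k8s.io/echoserver:1.4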
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-103096
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-103096:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0",
	        "Created": "2025-11-23T11:18:31.407055739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 735746,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:18:31.465640532Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/hosts",
	        "LogPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0-json.log",
	        "Name": "/default-k8s-diff-port-103096",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-103096:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-103096",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0",
	                "LowerDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103096",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103096/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103096",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103096",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103096",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0db298d13ec23644d9a659fc72259e63768b472707c3aeb53073e1c5c962121c",
	            "SandboxKey": "/var/run/docker/netns/0db298d13ec2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-103096": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:37:fc:1d:41:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e03847072cf28dc18f7a1d9d48fec693250a4b2bc18a1175017d251775e454c9",
	                    "EndpointID": "4c0313b31ed76747d74328ff5c56229075d56450784f286b7f51c59b4fbbe85d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103096",
	                        "ea90e0e4e065"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
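Rather than reading the full JSON dump, individual fields can be pulled with docker inspect's Go-template --format flag; a few illustrative queries against the container above (expected values taken from the dump, commands themselves are not part of the recorded output):
	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-103096
	# running
	docker inspect -f '{{(index .NetworkSettings.Networks "default-k8s-diff-port-103096").IPAddress}}' default-k8s-diff-port-103096
	# 192.168.85.2
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-103096
	# 33825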
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
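The status probe above only asks for the host state via minikube's Go-template output; when triaging manually, the other standard status fields can be requested the same way (illustrative sketch, assuming the usual status fields):
	out/minikube-linux-arm64 -p default-k8s-diff-port-103096 status \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'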
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 25: (2.191582387s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:15 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-258179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p no-preload-258179 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p newest-cni-058071 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-058071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:19:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:19:44.618325  742315 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:19:44.618459  742315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:44.618469  742315 out.go:374] Setting ErrFile to fd 2...
	I1123 11:19:44.618475  742315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:44.618726  742315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:19:44.619087  742315 out.go:368] Setting JSON to false
	I1123 11:19:44.619971  742315 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14534,"bootTime":1763882251,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:19:44.620038  742315 start.go:143] virtualization:  
	I1123 11:19:44.623243  742315 out.go:179] * [newest-cni-058071] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:19:44.627248  742315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:19:44.627491  742315 notify.go:221] Checking for updates...
	I1123 11:19:44.633060  742315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:19:44.636027  742315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:44.638930  742315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:19:44.641896  742315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:19:44.644731  742315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:19:44.648089  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:44.648716  742315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:19:44.671629  742315 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:19:44.671751  742315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:44.738265  742315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:44.727634366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:44.738371  742315 docker.go:319] overlay module found
	I1123 11:19:44.743346  742315 out.go:179] * Using the docker driver based on existing profile
	I1123 11:19:44.746111  742315 start.go:309] selected driver: docker
	I1123 11:19:44.746128  742315 start.go:927] validating driver "docker" against &{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:44.746249  742315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:19:44.750357  742315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:44.807038  742315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:44.797465189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:44.807374  742315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:19:44.807404  742315 cni.go:84] Creating CNI manager for ""
	I1123 11:19:44.807461  742315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:44.807504  742315 start.go:353] cluster config:
	{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:44.812512  742315 out.go:179] * Starting "newest-cni-058071" primary control-plane node in "newest-cni-058071" cluster
	I1123 11:19:44.815290  742315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:19:44.818178  742315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:19:44.820979  742315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:44.821031  742315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:19:44.821041  742315 cache.go:65] Caching tarball of preloaded images
	I1123 11:19:44.821068  742315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:19:44.821137  742315 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:19:44.821147  742315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:19:44.821259  742315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:44.846039  742315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:19:44.846062  742315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:19:44.846078  742315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:19:44.846108  742315 start.go:360] acquireMachinesLock for newest-cni-058071: {Name:mkcc8b04939d321e7fa14f673dfa688f531ff5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:19:44.846163  742315 start.go:364] duration metric: took 35.029µs to acquireMachinesLock for "newest-cni-058071"
	I1123 11:19:44.846188  742315 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:19:44.846201  742315 fix.go:54] fixHost starting: 
	I1123 11:19:44.846456  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:44.863432  742315 fix.go:112] recreateIfNeeded on newest-cni-058071: state=Stopped err=<nil>
	W1123 11:19:44.863463  742315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:19:41.409289  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:43.908137  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:45.915466  735340 node_ready.go:49] node "default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:45.915497  735340 node_ready.go:38] duration metric: took 40.010059173s for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:19:45.915513  735340 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:45.915574  735340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:45.933170  735340 api_server.go:72] duration metric: took 42.004976922s to wait for apiserver process to appear ...
	I1123 11:19:45.933198  735340 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:19:45.933220  735340 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:19:45.962722  735340 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:19:45.965518  735340 api_server.go:141] control plane version: v1.34.1
	I1123 11:19:45.965548  735340 api_server.go:131] duration metric: took 32.341977ms to wait for apiserver health ...
	I1123 11:19:45.965557  735340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:19:45.986799  735340 system_pods.go:59] 8 kube-system pods found
	I1123 11:19:45.986840  735340 system_pods.go:61] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending
	I1123 11:19:45.986864  735340 system_pods.go:61] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:45.986911  735340 system_pods.go:61] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:45.986932  735340 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:45.986937  735340 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:45.986941  735340 system_pods.go:61] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:45.986945  735340 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:45.986962  735340 system_pods.go:61] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:45.986982  735340 system_pods.go:74] duration metric: took 21.411513ms to wait for pod list to return data ...
	I1123 11:19:45.986997  735340 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:19:45.989965  735340 default_sa.go:45] found service account: "default"
	I1123 11:19:45.990037  735340 default_sa.go:55] duration metric: took 3.032498ms for default service account to be created ...
	I1123 11:19:45.990062  735340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:19:45.997322  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:45.997456  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending
	I1123 11:19:45.997482  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:45.997506  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:45.997545  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:45.997571  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:45.997593  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:45.997632  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:45.997659  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:45.997719  735340 retry.go:31] will retry after 223.844429ms: missing components: kube-dns
	I1123 11:19:46.226266  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.226302  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:19:46.226310  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.226316  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.226339  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.226372  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.226383  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.226387  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.226393  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:46.226415  735340 retry.go:31] will retry after 269.174574ms: missing components: kube-dns
	I1123 11:19:46.503566  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.503648  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:19:46.503680  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.503702  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.503731  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.503763  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.503788  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.503810  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.503845  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:46.503874  735340 retry.go:31] will retry after 349.134365ms: missing components: kube-dns
	I1123 11:19:46.857167  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.857257  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running
	I1123 11:19:46.857290  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.857313  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.857335  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.857356  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.857388  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.857443  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.857454  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:19:46.857464  735340 system_pods.go:126] duration metric: took 867.382706ms to wait for k8s-apps to be running ...
	I1123 11:19:46.857471  735340 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:19:46.857565  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:19:46.871621  735340 system_svc.go:56] duration metric: took 14.138981ms WaitForService to wait for kubelet
	I1123 11:19:46.871693  735340 kubeadm.go:587] duration metric: took 42.94350422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:19:46.871718  735340 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:19:46.874817  735340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:19:46.874848  735340 node_conditions.go:123] node cpu capacity is 2
	I1123 11:19:46.874862  735340 node_conditions.go:105] duration metric: took 3.137698ms to run NodePressure ...
	I1123 11:19:46.874875  735340 start.go:242] waiting for startup goroutines ...
	I1123 11:19:46.874883  735340 start.go:247] waiting for cluster config update ...
	I1123 11:19:46.874900  735340 start.go:256] writing updated cluster config ...
	I1123 11:19:46.875232  735340 ssh_runner.go:195] Run: rm -f paused
	I1123 11:19:46.878961  735340 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:19:46.957386  735340 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.962693  735340 pod_ready.go:94] pod "coredns-66bc5c9577-jxjjg" is "Ready"
	I1123 11:19:46.962731  735340 pod_ready.go:86] duration metric: took 5.28005ms for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.965268  735340 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.969979  735340 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:46.970010  735340 pod_ready.go:86] duration metric: took 4.715712ms for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.972372  735340 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.976670  735340 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:46.976698  735340 pod_ready.go:86] duration metric: took 4.302763ms for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.979034  735340 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.283559  735340 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:47.283586  735340 pod_ready.go:86] duration metric: took 304.480419ms for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.482856  735340 pod_ready.go:83] waiting for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.883105  735340 pod_ready.go:94] pod "kube-proxy-kp7fv" is "Ready"
	I1123 11:19:47.883132  735340 pod_ready.go:86] duration metric: took 400.237422ms for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.083580  735340 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.482628  735340 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:48.482672  735340 pod_ready.go:86] duration metric: took 399.055275ms for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.482687  735340 pod_ready.go:40] duration metric: took 1.603691622s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:19:48.568932  735340 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:19:48.572293  735340 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103096" cluster and "default" namespace by default
	I1123 11:19:44.866695  742315 out.go:252] * Restarting existing docker container for "newest-cni-058071" ...
	I1123 11:19:44.866781  742315 cli_runner.go:164] Run: docker start newest-cni-058071
	I1123 11:19:45.269045  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:45.296713  742315 kic.go:430] container "newest-cni-058071" state is running.
	I1123 11:19:45.297507  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:45.323006  742315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:45.323390  742315 machine.go:94] provisionDockerMachine start ...
	I1123 11:19:45.323513  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:45.350795  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:45.351325  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:45.351340  742315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:19:45.353393  742315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:19:48.507891  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:48.507913  742315 ubuntu.go:182] provisioning hostname "newest-cni-058071"
	I1123 11:19:48.507976  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:48.534692  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:48.535018  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:48.535031  742315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-058071 && echo "newest-cni-058071" | sudo tee /etc/hostname
	I1123 11:19:48.752756  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:48.752833  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:48.803242  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:48.803544  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:48.803562  742315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-058071' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-058071/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-058071' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:19:48.973866  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:19:48.973934  742315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:19:48.973963  742315 ubuntu.go:190] setting up certificates
	I1123 11:19:48.973973  742315 provision.go:84] configureAuth start
	I1123 11:19:48.974067  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:48.991997  742315 provision.go:143] copyHostCerts
	I1123 11:19:48.992073  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:19:48.992100  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:19:48.992182  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:19:48.992279  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:19:48.992290  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:19:48.992317  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:19:48.992420  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:19:48.992430  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:19:48.992453  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:19:48.992503  742315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.newest-cni-058071 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-058071]
	I1123 11:19:49.168901  742315 provision.go:177] copyRemoteCerts
	I1123 11:19:49.169018  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:19:49.169113  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.219548  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:49.333289  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:19:49.353433  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:19:49.372314  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:19:49.392569  742315 provision.go:87] duration metric: took 418.573025ms to configureAuth
	I1123 11:19:49.392609  742315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:19:49.392854  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:49.392993  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.411998  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:49.412335  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:49.412356  742315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:19:49.762536  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:19:49.762562  742315 machine.go:97] duration metric: took 4.439158639s to provisionDockerMachine
	I1123 11:19:49.762575  742315 start.go:293] postStartSetup for "newest-cni-058071" (driver="docker")
	I1123 11:19:49.762587  742315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:19:49.762670  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:19:49.762719  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.780214  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:49.889878  742315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:19:49.893471  742315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:19:49.893550  742315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:19:49.893570  742315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:19:49.893624  742315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:19:49.893705  742315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:19:49.893808  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:19:49.901459  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:49.920044  742315 start.go:296] duration metric: took 157.452391ms for postStartSetup
	I1123 11:19:49.920169  742315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:19:49.920240  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.938475  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.043034  742315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:19:50.048308  742315 fix.go:56] duration metric: took 5.202099069s for fixHost
	I1123 11:19:50.048334  742315 start.go:83] releasing machines lock for "newest-cni-058071", held for 5.20215708s
	I1123 11:19:50.048453  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:50.066857  742315 ssh_runner.go:195] Run: cat /version.json
	I1123 11:19:50.066917  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:50.066926  742315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:19:50.067013  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:50.100221  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.101997  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.295444  742315 ssh_runner.go:195] Run: systemctl --version
	I1123 11:19:50.301801  742315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:19:50.338619  742315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:19:50.342949  742315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:19:50.343054  742315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:19:50.351186  742315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:19:50.351212  742315 start.go:496] detecting cgroup driver to use...
	I1123 11:19:50.351269  742315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:19:50.351347  742315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:19:50.367066  742315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:19:50.381479  742315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:19:50.381581  742315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:19:50.399390  742315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:19:50.413833  742315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:19:50.526594  742315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:19:50.650905  742315 docker.go:234] disabling docker service ...
	I1123 11:19:50.651029  742315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:19:50.668907  742315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:19:50.683792  742315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:19:50.813878  742315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:19:50.941111  742315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:19:50.954589  742315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:19:50.969124  742315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:19:50.969233  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.978239  742315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:19:50.978310  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.987886  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.997715  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.009217  742315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:19:51.019070  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.030345  742315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.040370  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.051079  742315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:19:51.059983  742315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:19:51.070139  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:51.242835  742315 ssh_runner.go:195] Run: sudo systemctl restart crio
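Note on the sequence above: the sed commands all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, pinning the pause image to registry.k8s.io/pause:3.10.1, switching cgroup_manager to "cgroupfs", forcing conmon_cgroup to "pod", and adding net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A minimal way to confirm those keys landed in the drop-in (path and values taken from the commands above; the grep pattern itself is only illustrative, not something this test runs):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf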
	I1123 11:19:51.466880  742315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:19:51.466954  742315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:19:51.473586  742315 start.go:564] Will wait 60s for crictl version
	I1123 11:19:51.473743  742315 ssh_runner.go:195] Run: which crictl
	I1123 11:19:51.479330  742315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:19:51.509369  742315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:19:51.509577  742315 ssh_runner.go:195] Run: crio --version
	I1123 11:19:51.540482  742315 ssh_runner.go:195] Run: crio --version
	I1123 11:19:51.573187  742315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:19:51.576058  742315 cli_runner.go:164] Run: docker network inspect newest-cni-058071 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:19:51.596104  742315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:19:51.600564  742315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:51.614496  742315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 11:19:51.617613  742315 kubeadm.go:884] updating cluster {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:19:51.617764  742315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:51.617839  742315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:51.655314  742315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:51.655339  742315 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:19:51.655432  742315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:51.685147  742315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:51.685170  742315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:19:51.685178  742315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:19:51.685285  742315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-058071 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
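A note on the kubelet unit snippet above: the bare ExecStart= line is the standard systemd drop-in idiom for clearing the ExecStart inherited from the base kubelet.service, after which the minikube-specific command line is installed. One way to inspect the merged unit on the node (a generic systemctl invocation, not a command recorded in this log):

    systemctl cat kubelet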
	I1123 11:19:51.685375  742315 ssh_runner.go:195] Run: crio config
	I1123 11:19:51.743255  742315 cni.go:84] Creating CNI manager for ""
	I1123 11:19:51.743285  742315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:51.743310  742315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 11:19:51.743335  742315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-058071 NodeName:newest-cni-058071 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:19:51.743471  742315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-058071"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:19:51.743557  742315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:19:51.753883  742315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:19:51.754006  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:19:51.762325  742315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 11:19:51.775712  742315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:19:51.788529  742315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
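The kubeadm config rendered above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. If you wanted to sanity-check such a file by hand, recent kubeadm releases ship a validator; this is a sketch of that check using the binary path seen elsewhere in this log, not a step the test itself performs:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new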
	I1123 11:19:51.804648  742315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:19:51.809303  742315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:51.821570  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:51.938837  742315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:51.957972  742315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071 for IP: 192.168.76.2
	I1123 11:19:51.958035  742315 certs.go:195] generating shared ca certs ...
	I1123 11:19:51.958066  742315 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:51.958226  742315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:19:51.958310  742315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:19:51.958343  742315 certs.go:257] generating profile certs ...
	I1123 11:19:51.958450  742315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.key
	I1123 11:19:51.958593  742315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe
	I1123 11:19:51.958672  742315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key
	I1123 11:19:51.958808  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:19:51.958872  742315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:19:51.958899  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:19:51.958958  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:19:51.959016  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:19:51.959072  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:19:51.959151  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:51.959843  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:19:51.980033  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:19:52.000104  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:19:52.023963  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:19:52.047526  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:19:52.069834  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:19:52.095636  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:19:52.128764  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:19:52.158765  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:19:52.179578  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:19:52.200119  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:19:52.219939  742315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:19:52.233651  742315 ssh_runner.go:195] Run: openssl version
	I1123 11:19:52.239968  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:19:52.248699  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.252974  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.253097  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.296708  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:19:52.306614  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:19:52.314774  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.318587  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.318708  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.359535  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:19:52.367601  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:19:52.375829  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.379462  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.379598  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.424527  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:19:52.432595  742315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:19:52.436406  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:19:52.478133  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:19:52.519288  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:19:52.560663  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:19:52.611632  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:19:52.684174  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
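The openssl runs above use -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so minikube can decide whether the existing control-plane certs are safe to reuse. An equivalent manual look at one of the same files (path taken from the commands above):

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt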
	I1123 11:19:52.766400  742315 kubeadm.go:401] StartCluster: {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:52.766503  742315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:19:52.766621  742315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:19:52.821235  742315 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:19:52.821260  742315 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:19:52.821266  742315 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:19:52.821270  742315 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:19:52.821278  742315 cri.go:89] found id: ""
	I1123 11:19:52.821361  742315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:19:52.846270  742315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:19:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:19:52.846386  742315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:19:52.863485  742315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:19:52.863558  742315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:19:52.863650  742315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:19:52.881820  742315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:19:52.882496  742315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-058071" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:52.882823  742315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-058071" cluster setting kubeconfig missing "newest-cni-058071" context setting]
	I1123 11:19:52.883361  742315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.885199  742315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:19:52.898096  742315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:19:52.898177  742315 kubeadm.go:602] duration metric: took 34.598927ms to restartPrimaryControlPlane
	I1123 11:19:52.898243  742315 kubeadm.go:403] duration metric: took 131.853098ms to StartCluster
	I1123 11:19:52.898279  742315 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.898368  742315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:52.899447  742315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.899741  742315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:19:52.900274  742315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:19:52.900357  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:52.900367  742315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-058071"
	I1123 11:19:52.900382  742315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-058071"
	W1123 11:19:52.900388  742315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:19:52.900413  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.900418  742315 addons.go:70] Setting dashboard=true in profile "newest-cni-058071"
	I1123 11:19:52.900430  742315 addons.go:239] Setting addon dashboard=true in "newest-cni-058071"
	W1123 11:19:52.900436  742315 addons.go:248] addon dashboard should already be in state true
	I1123 11:19:52.900455  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.900890  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.901140  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.901374  742315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-058071"
	I1123 11:19:52.901400  742315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-058071"
	I1123 11:19:52.902124  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.905999  742315 out.go:179] * Verifying Kubernetes components...
	I1123 11:19:52.909155  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:52.942466  742315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-058071"
	W1123 11:19:52.942488  742315 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:19:52.942512  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.942959  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.980854  742315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:19:52.983078  742315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:19:52.986214  742315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:19:52.986266  742315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:52.986282  742315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:19:52.986350  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:52.990630  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:19:52.990653  742315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:19:52.990727  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:52.995804  742315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:52.995839  742315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:19:52.995980  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:53.048306  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.058976  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.071262  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.277894  742315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:53.304530  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:53.318473  742315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:53.318551  742315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:53.351895  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:53.374720  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:19:53.374745  742315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:19:53.482645  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:19:53.482670  742315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:19:53.523581  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:19:53.523606  742315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:19:53.544973  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:19:53.544999  742315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:19:53.568172  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:19:53.568197  742315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:19:53.591500  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:19:53.591524  742315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:19:53.610849  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:19:53.610873  742315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:19:53.634614  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:19:53.634640  742315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:19:53.659063  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:19:53.659089  742315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:19:53.682748  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
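The final kubectl apply above installs all of the dashboard addon manifests in one shot. A minimal follow-up check, assuming the addon's usual kubernetes-dashboard namespace (the namespace name itself is not shown in this log excerpt), mirroring the invocation style the test uses:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl get pods -n kubernetes-dashboard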
	
	
	==> CRI-O <==
	Nov 23 11:19:46 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:46.338754706Z" level=info msg="Created container c020fdb9b2ed7595df63c2915eace7028201993c442173190feaca61e8cc4626: kube-system/coredns-66bc5c9577-jxjjg/coredns" id=1184d3ed-60c2-47f9-8a15-eb984ed9ac23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:46 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:46.339902712Z" level=info msg="Starting container: c020fdb9b2ed7595df63c2915eace7028201993c442173190feaca61e8cc4626" id=000c5f4f-c5b9-43d6-a2e3-e2b91f29e0c6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:19:46 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:46.345736252Z" level=info msg="Started container" PID=1744 containerID=c020fdb9b2ed7595df63c2915eace7028201993c442173190feaca61e8cc4626 description=kube-system/coredns-66bc5c9577-jxjjg/coredns id=000c5f4f-c5b9-43d6-a2e3-e2b91f29e0c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca7dfcf8d93abb3030e38fa4e15430b793da833b9f18218dc7d7bb94f8a4247d
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.174268494Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0a1940bf-3f9d-4b10-b901-320613db2046 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.174366999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.182360343Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7 UID:132528ae-9172-48d0-89be-41e905f4ee49 NetNS:/var/run/netns/94ce7c47-8a5e-4606-bff1-f1137b2c6ae6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078b68}] Aliases:map[]}"
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.182410682Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.196010844Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7 UID:132528ae-9172-48d0-89be-41e905f4ee49 NetNS:/var/run/netns/94ce7c47-8a5e-4606-bff1-f1137b2c6ae6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078b68}] Aliases:map[]}"
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.196170733Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.206242566Z" level=info msg="Ran pod sandbox d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7 with infra container: default/busybox/POD" id=0a1940bf-3f9d-4b10-b901-320613db2046 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.21090473Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b59c2c93-7bf1-4825-817a-f0ce7beddf5a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.211124789Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b59c2c93-7bf1-4825-817a-f0ce7beddf5a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.211181857Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b59c2c93-7bf1-4825-817a-f0ce7beddf5a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.212978395Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9ca9ad72-b5dc-45e1-9d0e-34cbf443eaff name=/runtime.v1.ImageService/PullImage
	Nov 23 11:19:49 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:49.22054809Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.414709152Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9ca9ad72-b5dc-45e1-9d0e-34cbf443eaff name=/runtime.v1.ImageService/PullImage
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.415520765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=154041eb-55b3-45f9-89c1-0de687eedd33 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.419873408Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=47076b31-42f8-43a2-928a-342ced9363e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.425825867Z" level=info msg="Creating container: default/busybox/busybox" id=1e9e3330-e760-4561-aff0-e2609fc1a8bd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.425974834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.431099549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.431734378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.451245938Z" level=info msg="Created container 22a07a6423ab2b839004f46f41b6b29418843dd4bdf0b56f23de73c145c002cf: default/busybox/busybox" id=1e9e3330-e760-4561-aff0-e2609fc1a8bd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.454423834Z" level=info msg="Starting container: 22a07a6423ab2b839004f46f41b6b29418843dd4bdf0b56f23de73c145c002cf" id=804227cd-a37c-4cf4-8e40-fce392c3afc7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:19:51 default-k8s-diff-port-103096 crio[836]: time="2025-11-23T11:19:51.458933634Z" level=info msg="Started container" PID=1797 containerID=22a07a6423ab2b839004f46f41b6b29418843dd4bdf0b56f23de73c145c002cf description=default/busybox/busybox id=804227cd-a37c-4cf4-8e40-fce392c3afc7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	22a07a6423ab2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   d1f8630fec07c       busybox                                                default
	c020fdb9b2ed7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   ca7dfcf8d93ab       coredns-66bc5c9577-jxjjg                               kube-system
	47b9b80dc1002       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   54d72fc493d5f       storage-provisioner                                    kube-system
	d06e947a9fe65       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   ad17502478050       kube-proxy-kp7fv                                       kube-system
	7435d911a2299       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   3a1946537e9de       kindnet-flj5s                                          kube-system
	a8e9b62bbabdd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   216babeac3da9       kube-apiserver-default-k8s-diff-port-103096            kube-system
	1eb8a703460a0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   01255b6f2a33a       kube-scheduler-default-k8s-diff-port-103096            kube-system
	e29f2931e3bab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   4d8762e74b7ac       kube-controller-manager-default-k8s-diff-port-103096   kube-system
	46ab192365da1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   ecf629905628c       etcd-default-k8s-diff-port-103096                      kube-system
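The table above is the node's CRI-level view of containers; it corresponds to what a manual crictl listing would show (an illustrative command, not one recorded in this log):

    sudo crictl ps -a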
	
	
	==> coredns [c020fdb9b2ed7595df63c2915eace7028201993c442173190feaca61e8cc4626] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45460 - 64711 "HINFO IN 3067205350115307195.2961410597226963163. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040276011s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103096
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-103096
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-103096
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_18_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:18:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103096
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:19:59 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:19:59 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:19:59 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:19:59 +0000   Sun, 23 Nov 2025 11:19:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-103096
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                89e61585-704f-4a7a-8b1e-bc99234af9b9
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-jxjjg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-103096                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-flj5s                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-103096             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103096    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-kp7fv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-103096             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-103096 event: Registered Node default-k8s-diff-port-103096 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-103096 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	[Nov23 11:19] overlayfs: idmapped layers are currently not supported
	[ +26.182636] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [46ab192365da127815bbbbcc56254a0f3994824a01e44615b502ab397ce076ce] <==
	{"level":"warn","ts":"2025-11-23T11:18:53.431712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.464517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.541313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.569665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.593028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.620112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.638148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.656716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.670562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.697321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.741491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.767186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.777469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.810564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.834264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.864256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.908391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.946420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.962097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:53.982158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:54.003073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:54.034259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:54.064182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:54.095255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:18:54.179606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46954","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:19:59 up  4:02,  0 user,  load average: 4.82, 3.79, 3.09
	Linux default-k8s-diff-port-103096 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7435d911a22990803dca09e8c11f16dbae7563d584fc8920ea38454b56bbde8c] <==
	I1123 11:19:05.467746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:19:05.467973       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:19:05.468094       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:19:05.468106       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:19:05.468116       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:19:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:19:05.684736       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:19:05.685528       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:19:05.686049       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:19:05.686475       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:19:35.683627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:19:35.685671       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 11:19:35.686779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:19:35.686884       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 11:19:37.186231       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:19:37.186264       1 metrics.go:72] Registering metrics
	I1123 11:19:37.186334       1 controller.go:711] "Syncing nftables rules"
	I1123 11:19:45.689592       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:19:45.689649       1 main.go:301] handling current node
	I1123 11:19:55.686292       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:19:55.686466       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a8e9b62bbabddfbfd2aa123093230f3fc077b5185399939464c8786a8c5eb70d] <==
	I1123 11:18:55.255843       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:18:55.258523       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:18:55.258539       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:18:55.307981       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:18:55.309290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 11:18:55.324041       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 11:18:55.533162       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:18:55.942690       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:18:55.949756       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:18:55.949782       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:18:57.120086       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:18:57.183698       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:18:57.302366       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:18:57.312062       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 11:18:57.313327       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:18:57.319119       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:18:58.163241       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:18:58.360697       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:18:58.414070       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:18:58.454130       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 11:19:03.924533       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 11:19:04.263292       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:04.294072       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:04.327226       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 11:19:57.175501       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:33642: use of closed network connection
	
	
	==> kube-controller-manager [e29f2931e3bab9daa1b9e0e63adc891070e3aa4086dee71a5cf8c01eda754c22] <==
	I1123 11:19:03.206867       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 11:19:03.207264       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:19:03.207452       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:19:03.209303       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 11:19:03.210760       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:19:03.211409       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:19:03.211515       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:19:03.211600       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103096"
	I1123 11:19:03.214833       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:19:03.214871       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:19:03.214986       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:19:03.215020       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 11:19:03.216312       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 11:19:03.216330       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:19:03.220527       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 11:19:03.220799       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:19:03.220861       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 11:19:03.221126       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-103096" podCIDRs=["10.244.0.0/24"]
	I1123 11:19:03.228404       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:19:03.240367       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:19:03.249769       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:19:03.303966       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:19:03.303994       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:19:03.304004       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:19:48.225290       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d06e947a9fe6565d61d1e7f7cd243d24f23d0c30c8bb81acee406d3c159d24d7] <==
	I1123 11:19:05.883615       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:19:06.093112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:19:06.193480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:19:06.193521       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:19:06.193591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:19:06.216137       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:19:06.216290       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:19:06.227209       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:19:06.227610       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:19:06.227671       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:19:06.228940       1 config.go:200] "Starting service config controller"
	I1123 11:19:06.229002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:19:06.229056       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:19:06.229084       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:19:06.229121       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:19:06.229148       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:19:06.229879       1 config.go:309] "Starting node config controller"
	I1123 11:19:06.232266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:19:06.232324       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:19:06.329124       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:19:06.329227       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:19:06.329251       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1eb8a703460a0a35417b1e23e0c13f2df789fc43840d86b1107cc782f8b12f91] <==
	E1123 11:18:55.214109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:18:55.214249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:18:55.214399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:18:55.215867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:18:55.215958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:18:55.216147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:18:55.216204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:18:55.216266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:18:55.216311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:18:55.216344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:18:55.216640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:18:55.224635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:18:56.086197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:18:56.121682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:18:56.190355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:18:56.219299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:18:56.299147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:18:56.303202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:18:56.344532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:18:56.443810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:18:56.443832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:18:56.481557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:18:56.498452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:18:56.586782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1123 11:18:58.150741       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:19:03 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:03.317202    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: E1123 11:19:04.087664    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-flj5s\" is forbidden: User \"system:node:default-k8s-diff-port-103096\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-103096' and this object" podUID="60f06024-23b3-40d8-8fd0-b02eb7d12f6c" pod="kube-system/kindnet-flj5s"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: E1123 11:19:04.087748    1310 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-103096\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-103096' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.188726    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60f06024-23b3-40d8-8fd0-b02eb7d12f6c-xtables-lock\") pod \"kindnet-flj5s\" (UID: \"60f06024-23b3-40d8-8fd0-b02eb7d12f6c\") " pod="kube-system/kindnet-flj5s"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.188774    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60f06024-23b3-40d8-8fd0-b02eb7d12f6c-cni-cfg\") pod \"kindnet-flj5s\" (UID: \"60f06024-23b3-40d8-8fd0-b02eb7d12f6c\") " pod="kube-system/kindnet-flj5s"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.188796    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60f06024-23b3-40d8-8fd0-b02eb7d12f6c-lib-modules\") pod \"kindnet-flj5s\" (UID: \"60f06024-23b3-40d8-8fd0-b02eb7d12f6c\") " pod="kube-system/kindnet-flj5s"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.188819    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwk2\" (UniqueName: \"kubernetes.io/projected/60f06024-23b3-40d8-8fd0-b02eb7d12f6c-kube-api-access-gzwk2\") pod \"kindnet-flj5s\" (UID: \"60f06024-23b3-40d8-8fd0-b02eb7d12f6c\") " pod="kube-system/kindnet-flj5s"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.290421    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa7fabe6-6495-4392-a507-fb069447788d-kube-proxy\") pod \"kube-proxy-kp7fv\" (UID: \"fa7fabe6-6495-4392-a507-fb069447788d\") " pod="kube-system/kube-proxy-kp7fv"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.290500    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa7fabe6-6495-4392-a507-fb069447788d-xtables-lock\") pod \"kube-proxy-kp7fv\" (UID: \"fa7fabe6-6495-4392-a507-fb069447788d\") " pod="kube-system/kube-proxy-kp7fv"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.290522    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa7fabe6-6495-4392-a507-fb069447788d-lib-modules\") pod \"kube-proxy-kp7fv\" (UID: \"fa7fabe6-6495-4392-a507-fb069447788d\") " pod="kube-system/kube-proxy-kp7fv"
	Nov 23 11:19:04 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:04.290565    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksc8t\" (UniqueName: \"kubernetes.io/projected/fa7fabe6-6495-4392-a507-fb069447788d-kube-api-access-ksc8t\") pod \"kube-proxy-kp7fv\" (UID: \"fa7fabe6-6495-4392-a507-fb069447788d\") " pod="kube-system/kube-proxy-kp7fv"
	Nov 23 11:19:05 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:05.125387    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:19:05 default-k8s-diff-port-103096 kubelet[1310]: W1123 11:19:05.271207    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-3a1946537e9de1cc332276f54ef2bf8998d419f27b0ceaa3b77317ebb18311b6 WatchSource:0}: Error finding container 3a1946537e9de1cc332276f54ef2bf8998d419f27b0ceaa3b77317ebb18311b6: Status 404 returned error can't find the container with id 3a1946537e9de1cc332276f54ef2bf8998d419f27b0ceaa3b77317ebb18311b6
	Nov 23 11:19:05 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:05.763350    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-flj5s" podStartSLOduration=2.763331032 podStartE2EDuration="2.763331032s" podCreationTimestamp="2025-11-23 11:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:05.763131166 +0000 UTC m=+7.587556538" watchObservedRunningTime="2025-11-23 11:19:05.763331032 +0000 UTC m=+7.587756395"
	Nov 23 11:19:05 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:05.763472    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kp7fv" podStartSLOduration=2.763465763 podStartE2EDuration="2.763465763s" podCreationTimestamp="2025-11-23 11:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:05.717950783 +0000 UTC m=+7.542376146" watchObservedRunningTime="2025-11-23 11:19:05.763465763 +0000 UTC m=+7.587891167"
	Nov 23 11:19:45 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:45.872419    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:46.093234    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1be632ff-229a-4a85-af86-6e0d2f5d9a75-tmp\") pod \"storage-provisioner\" (UID: \"1be632ff-229a-4a85-af86-6e0d2f5d9a75\") " pod="kube-system/storage-provisioner"
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:46.093284    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8b7\" (UniqueName: \"kubernetes.io/projected/1be632ff-229a-4a85-af86-6e0d2f5d9a75-kube-api-access-fv8b7\") pod \"storage-provisioner\" (UID: \"1be632ff-229a-4a85-af86-6e0d2f5d9a75\") " pod="kube-system/storage-provisioner"
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:46.093310    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ace9508d-52f1-425a-9e84-2a8defd07ae8-config-volume\") pod \"coredns-66bc5c9577-jxjjg\" (UID: \"ace9508d-52f1-425a-9e84-2a8defd07ae8\") " pod="kube-system/coredns-66bc5c9577-jxjjg"
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:46.093331    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfp9\" (UniqueName: \"kubernetes.io/projected/ace9508d-52f1-425a-9e84-2a8defd07ae8-kube-api-access-fzfp9\") pod \"coredns-66bc5c9577-jxjjg\" (UID: \"ace9508d-52f1-425a-9e84-2a8defd07ae8\") " pod="kube-system/coredns-66bc5c9577-jxjjg"
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: W1123 11:19:46.292827    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-ca7dfcf8d93abb3030e38fa4e15430b793da833b9f18218dc7d7bb94f8a4247d WatchSource:0}: Error finding container ca7dfcf8d93abb3030e38fa4e15430b793da833b9f18218dc7d7bb94f8a4247d: Status 404 returned error can't find the container with id ca7dfcf8d93abb3030e38fa4e15430b793da833b9f18218dc7d7bb94f8a4247d
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:46.818211    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jxjjg" podStartSLOduration=42.818190576 podStartE2EDuration="42.818190576s" podCreationTimestamp="2025-11-23 11:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:46.802298971 +0000 UTC m=+48.626724343" watchObservedRunningTime="2025-11-23 11:19:46.818190576 +0000 UTC m=+48.642615939"
	Nov 23 11:19:46 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:46.837277    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.837259092 podStartE2EDuration="41.837259092s" podCreationTimestamp="2025-11-23 11:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:19:46.81897722 +0000 UTC m=+48.643402591" watchObservedRunningTime="2025-11-23 11:19:46.837259092 +0000 UTC m=+48.661684447"
	Nov 23 11:19:49 default-k8s-diff-port-103096 kubelet[1310]: I1123 11:19:49.025442    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqrz\" (UniqueName: \"kubernetes.io/projected/132528ae-9172-48d0-89be-41e905f4ee49-kube-api-access-hvqrz\") pod \"busybox\" (UID: \"132528ae-9172-48d0-89be-41e905f4ee49\") " pod="default/busybox"
	Nov 23 11:19:49 default-k8s-diff-port-103096 kubelet[1310]: W1123 11:19:49.203427    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7 WatchSource:0}: Error finding container d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7: Status 404 returned error can't find the container with id d1f8630fec07c808fc2e8bf7e2be8e478943f17d02591ed8f884df2f6d3c43a7
	
	
	==> storage-provisioner [47b9b80dc1002a4c0c71fa1440b028c75aac5c37c498d7054938b8d4880148e0] <==
	I1123 11:19:46.308966       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:19:46.327397       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:19:46.327553       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:19:46.331129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:46.342959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:19:46.343259       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:19:46.343480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103096_0dee574f-b23e-4882-8f34-bbd58633752a!
	I1123 11:19:46.344458       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e8436e2-f872-447d-b72c-3f2b67de6c08", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-103096_0dee574f-b23e-4882-8f34-bbd58633752a became leader
	W1123 11:19:46.365270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:46.368990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:19:46.444551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103096_0dee574f-b23e-4882-8f34-bbd58633752a!
	W1123 11:19:48.373515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:48.380617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:50.384442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:50.390871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:52.394588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:52.403093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:54.406152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:54.412681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:56.416901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:56.425665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:58.429085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:19:58.438366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.14s)
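The post-mortem dump above (node description, dmesg, and the per-component "==> ... <==" logs) is what the test helpers collect automatically when EnableAddonWhileActive fails. A minimal sketch of how the same inspection could be reproduced by hand, assuming the default-k8s-diff-port-103096 profile and kubeconfig context seen in the logs are still available; the comments are editorial and not part of the test framework:

	# Node conditions, capacity, allocations and events, as in the node description at the top of this dump
	kubectl --context default-k8s-diff-port-103096 describe node default-k8s-diff-port-103096

	# Per-component logs (kubelet, etcd, kube-apiserver, ...) behind the "==> ... <==" sections above
	out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs

	# Pods that are not Running, using the same field selector helpers_test.go runs above
	kubectl --context default-k8s-diff-port-103096 get po -A --field-selector=status.phase!=Running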

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-058071 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-058071 --alsologtostderr -v=1: exit status 80 (2.319924893s)
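The GUEST_PAUSE failure reported in the stderr capture below follows the pause flow visible in the log: minikube lists CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then runs "sudo runc list -f json", which keeps exiting with status 1 because /run/runc does not exist on this CRI-O node. A minimal diagnostic sketch, assuming the newest-cni-058071 profile is still running; it only re-runs over minikube ssh the same commands the log shows, and is not a fix:

	# CRI-O still reports the kube-system containers the pause code enumerated
	out/minikube-linux-arm64 -p newest-cni-058071 ssh -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system

	# ...while the runc state directory that "runc list" reads is missing
	out/minikube-linux-arm64 -p newest-cni-058071 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p newest-cni-058071 ssh -- ls /run/runc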

                                                
                                                
-- stdout --
	* Pausing node newest-cni-058071 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:20:02.347727  744705 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:20:02.347931  744705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:20:02.347957  744705 out.go:374] Setting ErrFile to fd 2...
	I1123 11:20:02.347976  744705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:20:02.348257  744705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:20:02.348548  744705 out.go:368] Setting JSON to false
	I1123 11:20:02.348602  744705 mustload.go:66] Loading cluster: newest-cni-058071
	I1123 11:20:02.349066  744705 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:02.349625  744705 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:20:02.366761  744705 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:20:02.367096  744705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:20:02.435354  744705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 11:20:02.416260531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:20:02.436015  744705 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-058071 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 11:20:02.439569  744705 out.go:179] * Pausing node newest-cni-058071 ... 
	I1123 11:20:02.442515  744705 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:20:02.442858  744705 ssh_runner.go:195] Run: systemctl --version
	I1123 11:20:02.442913  744705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:20:02.460614  744705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:20:02.568105  744705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:20:02.582731  744705 pause.go:52] kubelet running: true
	I1123 11:20:02.582813  744705 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:20:02.789347  744705 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:20:02.789488  744705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:20:02.953448  744705 cri.go:89] found id: "07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c"
	I1123 11:20:02.953479  744705 cri.go:89] found id: "5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee"
	I1123 11:20:02.953485  744705 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:20:02.953490  744705 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:20:02.953494  744705 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:20:02.953498  744705 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:20:02.953502  744705 cri.go:89] found id: ""
	I1123 11:20:02.953563  744705 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:20:03.002583  744705 retry.go:31] will retry after 199.156547ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:02Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:20:03.201931  744705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:20:03.229176  744705 pause.go:52] kubelet running: false
	I1123 11:20:03.229259  744705 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:20:03.401931  744705 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:20:03.402069  744705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:20:03.483751  744705 cri.go:89] found id: "07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c"
	I1123 11:20:03.483777  744705 cri.go:89] found id: "5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee"
	I1123 11:20:03.483782  744705 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:20:03.483800  744705 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:20:03.483803  744705 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:20:03.483806  744705 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:20:03.483809  744705 cri.go:89] found id: ""
	I1123 11:20:03.483861  744705 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:20:03.496944  744705 retry.go:31] will retry after 325.928435ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:03Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:20:03.823219  744705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:20:03.836311  744705 pause.go:52] kubelet running: false
	I1123 11:20:03.836374  744705 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:20:03.977422  744705 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:20:03.977498  744705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:20:04.056157  744705 cri.go:89] found id: "07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c"
	I1123 11:20:04.056180  744705 cri.go:89] found id: "5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee"
	I1123 11:20:04.056184  744705 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:20:04.056193  744705 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:20:04.056196  744705 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:20:04.056204  744705 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:20:04.056207  744705 cri.go:89] found id: ""
	I1123 11:20:04.056258  744705 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:20:04.067199  744705 retry.go:31] will retry after 288.131583ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:04Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:20:04.355645  744705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:20:04.370091  744705 pause.go:52] kubelet running: false
	I1123 11:20:04.370191  744705 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:20:04.508571  744705 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:20:04.508651  744705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:20:04.582100  744705 cri.go:89] found id: "07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c"
	I1123 11:20:04.582121  744705 cri.go:89] found id: "5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee"
	I1123 11:20:04.582126  744705 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:20:04.582130  744705 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:20:04.582133  744705 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:20:04.582137  744705 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:20:04.582140  744705 cri.go:89] found id: ""
	I1123 11:20:04.582193  744705 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:20:04.597194  744705 out.go:203] 
	W1123 11:20:04.600127  744705 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 11:20:04.600151  744705 out.go:285] * 
	* 
	W1123 11:20:04.608137  744705 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 11:20:04.611079  744705 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-058071 --alsologtostderr -v=1 failed: exit status 80
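Note on the failure above: the pause path lists containers with `sudo runc list -f json`, which exits 1 because `/run/runc` is missing on the freshly restarted node; minikube retries a few times with short backoffs (~200-330ms in this run) before giving up with GUEST_PAUSE. The following is a minimal, illustrative Go sketch of that retry-on-command-failure pattern only; the backoff schedule is hypothetical (not minikube's actual retry helper), and `runc` on PATH plus passwordless sudo are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc mirrors the failing step from the log: run `sudo runc list -f json`
// and return the combined output together with any error.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	// Hypothetical backoff schedule, similar in spirit to the ~200-330ms
	// retries seen in the log above; minikube's real retry logic differs.
	backoffs := []time.Duration{200 * time.Millisecond, 300 * time.Millisecond, 300 * time.Millisecond}
	for attempt, wait := range backoffs {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("runc containers:\n%s\n", out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s\n", attempt+1, err, out)
		time.Sleep(wait)
	}
	fmt.Println("giving up: runc list kept failing (e.g. /run/runc missing)")
}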
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-058071
helpers_test.go:243: (dbg) docker inspect newest-cni-058071:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559",
	        "Created": "2025-11-23T11:19:09.249053007Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:19:44.898360895Z",
	            "FinishedAt": "2025-11-23T11:19:43.783195868Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/hostname",
	        "HostsPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/hosts",
	        "LogPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559-json.log",
	        "Name": "/newest-cni-058071",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-058071:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-058071",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559",
	                "LowerDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-058071",
	                "Source": "/var/lib/docker/volumes/newest-cni-058071/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-058071",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-058071",
	                "name.minikube.sigs.k8s.io": "newest-cni-058071",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cef97981e41c9c065972f9da778d46d1f0dad13645f8bdf5cf9e4cfacbae35be",
	            "SandboxKey": "/var/run/docker/netns/cef97981e41c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-058071": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:83:9f:82:b3:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2ad1d74afe18771af1930500adbc0606f203b00728de9cd7c808850d196bbca",
	                    "EndpointID": "76e41c0ce156f5c877caedc39e1acad3dcf7c4cd0fef409771342ce9b26ce59d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-058071",
	                        "80b941940765"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
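The full `docker inspect` dump above is what the post-mortem records; when only a few fields matter (host state, mapped SSH port, the profile network's IP), a Go-template query is easier to read. A small sketch using the same `--format` templates the harness itself runs later in this log; the container name `newest-cni-058071` is taken from this run, and docker on PATH is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs `docker container inspect` with a Go template, the same
// pattern seen in the log (e.g. --format={{.State.Status}}).
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name, "--format", format).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "newest-cni-058071" // profile container from this run

	// Template paths follow the JSON structure in the dump above.
	for _, f := range []string{
		"{{.State.Status}}",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
	} {
		v, err := inspect(name, f)
		if err != nil {
			fmt.Printf("%-60s error: %v\n", f, err)
			continue
		}
		fmt.Printf("%-60s %s\n", f, v)
	}
}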
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071: exit status 2 (341.604824ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
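As the helper notes, a non-zero exit from `minikube status` is not necessarily a failure, since status reports component state through its exit code. A minimal sketch of capturing that exit code from Go; the binary path and profile name are copied from this run and stand in for other runs.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the post-mortem helper runs above.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "newest-cni-058071", "-n", "newest-cni-058071")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", strings.TrimSpace(string(out)))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero (here: 2) can still describe a valid paused/stopped state,
		// which is why the harness treats it as "may be ok".
		fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
	}
}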
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-058071 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-058071 logs -n 25: (1.097585593s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p newest-cni-058071 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-058071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103096 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │                     │
	│ image   │ newest-cni-058071 image list --format=json                                                                                                                                                                                                    │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ pause   │ -p newest-cni-058071 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:19:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:19:44.618325  742315 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:19:44.618459  742315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:44.618469  742315 out.go:374] Setting ErrFile to fd 2...
	I1123 11:19:44.618475  742315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:44.618726  742315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:19:44.619087  742315 out.go:368] Setting JSON to false
	I1123 11:19:44.619971  742315 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14534,"bootTime":1763882251,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:19:44.620038  742315 start.go:143] virtualization:  
	I1123 11:19:44.623243  742315 out.go:179] * [newest-cni-058071] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:19:44.627248  742315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:19:44.627491  742315 notify.go:221] Checking for updates...
	I1123 11:19:44.633060  742315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:19:44.636027  742315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:44.638930  742315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:19:44.641896  742315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:19:44.644731  742315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:19:44.648089  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:44.648716  742315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:19:44.671629  742315 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:19:44.671751  742315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:44.738265  742315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:44.727634366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:44.738371  742315 docker.go:319] overlay module found
	I1123 11:19:44.743346  742315 out.go:179] * Using the docker driver based on existing profile
	I1123 11:19:44.746111  742315 start.go:309] selected driver: docker
	I1123 11:19:44.746128  742315 start.go:927] validating driver "docker" against &{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:44.746249  742315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:19:44.750357  742315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:44.807038  742315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:44.797465189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:44.807374  742315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:19:44.807404  742315 cni.go:84] Creating CNI manager for ""
	I1123 11:19:44.807461  742315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:44.807504  742315 start.go:353] cluster config:
	{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:44.812512  742315 out.go:179] * Starting "newest-cni-058071" primary control-plane node in "newest-cni-058071" cluster
	I1123 11:19:44.815290  742315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:19:44.818178  742315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:19:44.820979  742315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:44.821031  742315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:19:44.821041  742315 cache.go:65] Caching tarball of preloaded images
	I1123 11:19:44.821068  742315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:19:44.821137  742315 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:19:44.821147  742315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:19:44.821259  742315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:44.846039  742315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:19:44.846062  742315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:19:44.846078  742315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:19:44.846108  742315 start.go:360] acquireMachinesLock for newest-cni-058071: {Name:mkcc8b04939d321e7fa14f673dfa688f531ff5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:19:44.846163  742315 start.go:364] duration metric: took 35.029µs to acquireMachinesLock for "newest-cni-058071"
	I1123 11:19:44.846188  742315 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:19:44.846201  742315 fix.go:54] fixHost starting: 
	I1123 11:19:44.846456  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:44.863432  742315 fix.go:112] recreateIfNeeded on newest-cni-058071: state=Stopped err=<nil>
	W1123 11:19:44.863463  742315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:19:41.409289  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:43.908137  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:45.915466  735340 node_ready.go:49] node "default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:45.915497  735340 node_ready.go:38] duration metric: took 40.010059173s for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:19:45.915513  735340 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:45.915574  735340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:45.933170  735340 api_server.go:72] duration metric: took 42.004976922s to wait for apiserver process to appear ...
	I1123 11:19:45.933198  735340 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:19:45.933220  735340 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:19:45.962722  735340 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:19:45.965518  735340 api_server.go:141] control plane version: v1.34.1
	I1123 11:19:45.965548  735340 api_server.go:131] duration metric: took 32.341977ms to wait for apiserver health ...
	I1123 11:19:45.965557  735340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:19:45.986799  735340 system_pods.go:59] 8 kube-system pods found
	I1123 11:19:45.986840  735340 system_pods.go:61] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending
	I1123 11:19:45.986864  735340 system_pods.go:61] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:45.986911  735340 system_pods.go:61] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:45.986932  735340 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:45.986937  735340 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:45.986941  735340 system_pods.go:61] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:45.986945  735340 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:45.986962  735340 system_pods.go:61] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:45.986982  735340 system_pods.go:74] duration metric: took 21.411513ms to wait for pod list to return data ...
	I1123 11:19:45.986997  735340 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:19:45.989965  735340 default_sa.go:45] found service account: "default"
	I1123 11:19:45.990037  735340 default_sa.go:55] duration metric: took 3.032498ms for default service account to be created ...
	I1123 11:19:45.990062  735340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:19:45.997322  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:45.997456  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending
	I1123 11:19:45.997482  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:45.997506  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:45.997545  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:45.997571  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:45.997593  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:45.997632  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:45.997659  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:45.997719  735340 retry.go:31] will retry after 223.844429ms: missing components: kube-dns
	I1123 11:19:46.226266  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.226302  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:19:46.226310  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.226316  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.226339  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.226372  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.226383  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.226387  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.226393  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:46.226415  735340 retry.go:31] will retry after 269.174574ms: missing components: kube-dns
	I1123 11:19:46.503566  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.503648  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:19:46.503680  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.503702  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.503731  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.503763  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.503788  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.503810  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.503845  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:46.503874  735340 retry.go:31] will retry after 349.134365ms: missing components: kube-dns
	I1123 11:19:46.857167  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.857257  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running
	I1123 11:19:46.857290  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.857313  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.857335  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.857356  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.857388  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.857443  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.857454  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:19:46.857464  735340 system_pods.go:126] duration metric: took 867.382706ms to wait for k8s-apps to be running ...
	I1123 11:19:46.857471  735340 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:19:46.857565  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:19:46.871621  735340 system_svc.go:56] duration metric: took 14.138981ms WaitForService to wait for kubelet
	I1123 11:19:46.871693  735340 kubeadm.go:587] duration metric: took 42.94350422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:19:46.871718  735340 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:19:46.874817  735340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:19:46.874848  735340 node_conditions.go:123] node cpu capacity is 2
	I1123 11:19:46.874862  735340 node_conditions.go:105] duration metric: took 3.137698ms to run NodePressure ...
	I1123 11:19:46.874875  735340 start.go:242] waiting for startup goroutines ...
	I1123 11:19:46.874883  735340 start.go:247] waiting for cluster config update ...
	I1123 11:19:46.874900  735340 start.go:256] writing updated cluster config ...
	I1123 11:19:46.875232  735340 ssh_runner.go:195] Run: rm -f paused
	I1123 11:19:46.878961  735340 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:19:46.957386  735340 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.962693  735340 pod_ready.go:94] pod "coredns-66bc5c9577-jxjjg" is "Ready"
	I1123 11:19:46.962731  735340 pod_ready.go:86] duration metric: took 5.28005ms for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.965268  735340 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.969979  735340 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:46.970010  735340 pod_ready.go:86] duration metric: took 4.715712ms for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.972372  735340 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.976670  735340 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:46.976698  735340 pod_ready.go:86] duration metric: took 4.302763ms for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.979034  735340 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.283559  735340 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:47.283586  735340 pod_ready.go:86] duration metric: took 304.480419ms for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.482856  735340 pod_ready.go:83] waiting for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.883105  735340 pod_ready.go:94] pod "kube-proxy-kp7fv" is "Ready"
	I1123 11:19:47.883132  735340 pod_ready.go:86] duration metric: took 400.237422ms for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.083580  735340 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.482628  735340 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:48.482672  735340 pod_ready.go:86] duration metric: took 399.055275ms for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.482687  735340 pod_ready.go:40] duration metric: took 1.603691622s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:19:48.568932  735340 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:19:48.572293  735340 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103096" cluster and "default" namespace by default
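	The readiness wait above polls each control-plane pod by label until it reports Ready (or is gone). A rough stand-alone equivalent of that check, assuming kubectl is pointed at the freshly configured "default-k8s-diff-port-103096" context (a sketch, not minikube's internal code path):

	    # Wait up to 4 minutes for the same kube-system pods minikube waits on above.
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl --context default-k8s-diff-port-103096 -n kube-system \
	        wait --for=condition=Ready pod -l "$sel" --timeout=4m
	    done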
	I1123 11:19:44.866695  742315 out.go:252] * Restarting existing docker container for "newest-cni-058071" ...
	I1123 11:19:44.866781  742315 cli_runner.go:164] Run: docker start newest-cni-058071
	I1123 11:19:45.269045  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:45.296713  742315 kic.go:430] container "newest-cni-058071" state is running.
	I1123 11:19:45.297507  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:45.323006  742315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:45.323390  742315 machine.go:94] provisionDockerMachine start ...
	I1123 11:19:45.323513  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:45.350795  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:45.351325  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:45.351340  742315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:19:45.353393  742315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:19:48.507891  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:48.507913  742315 ubuntu.go:182] provisioning hostname "newest-cni-058071"
	I1123 11:19:48.507976  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:48.534692  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:48.535018  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:48.535031  742315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-058071 && echo "newest-cni-058071" | sudo tee /etc/hostname
	I1123 11:19:48.752756  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:48.752833  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:48.803242  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:48.803544  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:48.803562  742315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-058071' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-058071/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-058071' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:19:48.973866  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
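	The hostname provisioning above is deliberately idempotent: it sets the hostname, then rewrites an existing 127.0.1.1 entry in /etc/hosts rather than appending a duplicate. Pulled out of the SSH session and parameterized for readability (the node name is the only input; same grep/sed logic as in the log):

	    NODE=newest-cni-058071
	    sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
	    if ! grep -q "[[:space:]]$NODE\$" /etc/hosts; then
	      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NODE/" /etc/hosts
	      else
	        echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
	      fi
	    fi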
	I1123 11:19:48.973934  742315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:19:48.973963  742315 ubuntu.go:190] setting up certificates
	I1123 11:19:48.973973  742315 provision.go:84] configureAuth start
	I1123 11:19:48.974067  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:48.991997  742315 provision.go:143] copyHostCerts
	I1123 11:19:48.992073  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:19:48.992100  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:19:48.992182  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:19:48.992279  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:19:48.992290  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:19:48.992317  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:19:48.992420  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:19:48.992430  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:19:48.992453  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:19:48.992503  742315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.newest-cni-058071 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-058071]
	I1123 11:19:49.168901  742315 provision.go:177] copyRemoteCerts
	I1123 11:19:49.169018  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:19:49.169113  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.219548  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:49.333289  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:19:49.353433  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:19:49.372314  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:19:49.392569  742315 provision.go:87] duration metric: took 418.573025ms to configureAuth
	I1123 11:19:49.392609  742315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:19:49.392854  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:49.392993  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.411998  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:49.412335  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:49.412356  742315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:19:49.762536  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:19:49.762562  742315 machine.go:97] duration metric: took 4.439158639s to provisionDockerMachine
	I1123 11:19:49.762575  742315 start.go:293] postStartSetup for "newest-cni-058071" (driver="docker")
	I1123 11:19:49.762587  742315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:19:49.762670  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:19:49.762719  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.780214  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:49.889878  742315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:19:49.893471  742315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:19:49.893550  742315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:19:49.893570  742315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:19:49.893624  742315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:19:49.893705  742315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:19:49.893808  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:19:49.901459  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:49.920044  742315 start.go:296] duration metric: took 157.452391ms for postStartSetup
	I1123 11:19:49.920169  742315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:19:49.920240  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.938475  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.043034  742315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:19:50.048308  742315 fix.go:56] duration metric: took 5.202099069s for fixHost
	I1123 11:19:50.048334  742315 start.go:83] releasing machines lock for "newest-cni-058071", held for 5.20215708s
	I1123 11:19:50.048453  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:50.066857  742315 ssh_runner.go:195] Run: cat /version.json
	I1123 11:19:50.066917  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:50.066926  742315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:19:50.067013  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:50.100221  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.101997  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.295444  742315 ssh_runner.go:195] Run: systemctl --version
	I1123 11:19:50.301801  742315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:19:50.338619  742315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:19:50.342949  742315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:19:50.343054  742315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:19:50.351186  742315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:19:50.351212  742315 start.go:496] detecting cgroup driver to use...
	I1123 11:19:50.351269  742315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:19:50.351347  742315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:19:50.367066  742315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:19:50.381479  742315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:19:50.381581  742315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:19:50.399390  742315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:19:50.413833  742315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:19:50.526594  742315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:19:50.650905  742315 docker.go:234] disabling docker service ...
	I1123 11:19:50.651029  742315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:19:50.668907  742315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:19:50.683792  742315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:19:50.813878  742315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:19:50.941111  742315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
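	Because this profile runs CRI-O, any Docker-based runtimes on the node are stopped and masked first so the kubelet can only reach the CRI-O socket. Condensed into one place, the sequence above amounts to the following (same systemctl calls as in the log):

	    # Stop and mask cri-dockerd and Docker so only CRI-O serves the CRI.
	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    sudo systemctl is-active --quiet docker || echo "docker is no longer active"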
	I1123 11:19:50.954589  742315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:19:50.969124  742315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:19:50.969233  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.978239  742315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:19:50.978310  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.987886  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.997715  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.009217  742315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:19:51.019070  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.030345  742315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.040370  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.051079  742315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:19:51.059983  742315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:19:51.070139  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:51.242835  742315 ssh_runner.go:195] Run: sudo systemctl restart crio
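	The CRI-O reconfiguration above boils down to pointing crictl at the CRI-O socket and editing /etc/crio/crio.conf.d/02-crio.conf in place before restarting the service. Collected into one script for reference (the sed expressions, pause image, and cgroup driver are the ones shown in the log):

	    # Tell crictl where the CRI-O socket lives.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # Pause image and cgroup driver expected by this kubeadm setup.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # Let pods bind privileged ports without extra capabilities.
	    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	    sudo grep -q '^ *default_sysctls' "$CONF" || \
	      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

	    # Enable forwarding, then restart CRI-O to pick up the new config.
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio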
	I1123 11:19:51.466880  742315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:19:51.466954  742315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:19:51.473586  742315 start.go:564] Will wait 60s for crictl version
	I1123 11:19:51.473743  742315 ssh_runner.go:195] Run: which crictl
	I1123 11:19:51.479330  742315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:19:51.509369  742315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:19:51.509577  742315 ssh_runner.go:195] Run: crio --version
	I1123 11:19:51.540482  742315 ssh_runner.go:195] Run: crio --version
	I1123 11:19:51.573187  742315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:19:51.576058  742315 cli_runner.go:164] Run: docker network inspect newest-cni-058071 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:19:51.596104  742315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:19:51.600564  742315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:51.614496  742315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 11:19:51.617613  742315 kubeadm.go:884] updating cluster {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:19:51.617764  742315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:51.617839  742315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:51.655314  742315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:51.655339  742315 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:19:51.655432  742315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:51.685147  742315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:51.685170  742315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:19:51.685178  742315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:19:51.685285  742315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-058071 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:19:51.685375  742315 ssh_runner.go:195] Run: crio config
	I1123 11:19:51.743255  742315 cni.go:84] Creating CNI manager for ""
	I1123 11:19:51.743285  742315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:51.743310  742315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 11:19:51.743335  742315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-058071 NodeName:newest-cni-058071 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:19:51.743471  742315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-058071"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:19:51.743557  742315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:19:51.753883  742315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:19:51.754006  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:19:51.762325  742315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 11:19:51.775712  742315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:19:51.788529  742315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
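	The rendered kubeadm config (Init, Cluster, Kubelet, and KubeProxy configurations in a single file, shown above) is staged as /var/tmp/minikube/kubeadm.yaml.new. On a restart it is only compared against the config already on the node; an empty diff is what later produces the "does not require reconfiguration" message. The check is effectively:

	    # Empty diff => the existing control plane can be reused as-is.
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "no control-plane reconfiguration needed"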
	I1123 11:19:51.804648  742315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:19:51.809303  742315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:51.821570  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:51.938837  742315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:51.957972  742315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071 for IP: 192.168.76.2
	I1123 11:19:51.958035  742315 certs.go:195] generating shared ca certs ...
	I1123 11:19:51.958066  742315 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:51.958226  742315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:19:51.958310  742315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:19:51.958343  742315 certs.go:257] generating profile certs ...
	I1123 11:19:51.958450  742315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.key
	I1123 11:19:51.958593  742315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe
	I1123 11:19:51.958672  742315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key
	I1123 11:19:51.958808  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:19:51.958872  742315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:19:51.958899  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:19:51.958958  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:19:51.959016  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:19:51.959072  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:19:51.959151  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:51.959843  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:19:51.980033  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:19:52.000104  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:19:52.023963  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:19:52.047526  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:19:52.069834  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:19:52.095636  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:19:52.128764  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:19:52.158765  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:19:52.179578  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:19:52.200119  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:19:52.219939  742315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:19:52.233651  742315 ssh_runner.go:195] Run: openssl version
	I1123 11:19:52.239968  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:19:52.248699  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.252974  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.253097  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.296708  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:19:52.306614  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:19:52.314774  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.318587  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.318708  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.359535  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:19:52.367601  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:19:52.375829  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.379462  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.379598  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.424527  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:19:52.432595  742315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:19:52.436406  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:19:52.478133  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:19:52.519288  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:19:52.560663  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:19:52.611632  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:19:52.684174  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
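	Each reused control-plane certificate is checked for at least another 24 hours (86400 seconds) of validity; a cert failing the check would force regeneration instead of reuse. The same test, run by hand over the certs listed above:

	    # Non-zero exit for any certificate expiring within the next day.
	    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	               etcd/server etcd/healthcheck-client etcd/peer; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
	        || echo "$crt expires within 24h"
	    done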
	I1123 11:19:52.766400  742315 kubeadm.go:401] StartCluster: {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:52.766503  742315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:19:52.766621  742315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:19:52.821235  742315 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:19:52.821260  742315 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:19:52.821266  742315 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:19:52.821270  742315 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:19:52.821278  742315 cri.go:89] found id: ""
	I1123 11:19:52.821361  742315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:19:52.846270  742315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:19:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:19:52.846386  742315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:19:52.863485  742315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:19:52.863558  742315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:19:52.863650  742315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:19:52.881820  742315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:19:52.882496  742315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-058071" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:52.882823  742315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-058071" cluster setting kubeconfig missing "newest-cni-058071" context setting]
	I1123 11:19:52.883361  742315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.885199  742315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:19:52.898096  742315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:19:52.898177  742315 kubeadm.go:602] duration metric: took 34.598927ms to restartPrimaryControlPlane
	I1123 11:19:52.898243  742315 kubeadm.go:403] duration metric: took 131.853098ms to StartCluster
	I1123 11:19:52.898279  742315 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.898368  742315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:52.899447  742315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.899741  742315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:19:52.900274  742315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:19:52.900357  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:52.900367  742315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-058071"
	I1123 11:19:52.900382  742315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-058071"
	W1123 11:19:52.900388  742315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:19:52.900413  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.900418  742315 addons.go:70] Setting dashboard=true in profile "newest-cni-058071"
	I1123 11:19:52.900430  742315 addons.go:239] Setting addon dashboard=true in "newest-cni-058071"
	W1123 11:19:52.900436  742315 addons.go:248] addon dashboard should already be in state true
	I1123 11:19:52.900455  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.900890  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.901140  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.901374  742315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-058071"
	I1123 11:19:52.901400  742315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-058071"
	I1123 11:19:52.902124  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.905999  742315 out.go:179] * Verifying Kubernetes components...
	I1123 11:19:52.909155  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:52.942466  742315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-058071"
	W1123 11:19:52.942488  742315 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:19:52.942512  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.942959  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.980854  742315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:19:52.983078  742315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:19:52.986214  742315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:19:52.986266  742315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:52.986282  742315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:19:52.986350  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:52.990630  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:19:52.990653  742315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:19:52.990727  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:52.995804  742315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:52.995839  742315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:19:52.995980  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:53.048306  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.058976  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.071262  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.277894  742315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:53.304530  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:53.318473  742315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:53.318551  742315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:53.351895  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:53.374720  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:19:53.374745  742315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:19:53.482645  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:19:53.482670  742315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:19:53.523581  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:19:53.523606  742315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:19:53.544973  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:19:53.544999  742315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:19:53.568172  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:19:53.568197  742315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:19:53.591500  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:19:53.591524  742315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:19:53.610849  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:19:53.610873  742315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:19:53.634614  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:19:53.634640  742315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:19:53.659063  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:19:53.659089  742315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:19:53.682748  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
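	The dashboard addon is nothing more than the static manifests copied into /etc/kubernetes/addons above, applied with the kubectl binary bundled for the cluster's Kubernetes version against the node-local kubeconfig. A follow-up check you could run afterwards, assuming the stock namespace and deployment names from the upstream dashboard manifests (they are not printed in this log, so treat them as hypothetical):

	    # Hypothetical: namespace/deployment names come from the standard
	    # kubernetes-dashboard manifests, not from this log.
	    kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m
	    kubectl -n kubernetes-dashboard get pods -o wide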
	I1123 11:19:58.501367  742315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.182790758s)
	I1123 11:19:58.501419  742315 api_server.go:72] duration metric: took 5.601602791s to wait for apiserver process to appear ...
	I1123 11:19:58.501429  742315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:19:58.501450  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:58.501445  742315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.196844886s)
	I1123 11:19:58.619539  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:19:58.619568  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
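	The 500 responses above come from the rbac/bootstrap-roles post-start hook, which keeps reporting failed until the default RBAC roles have been reconciled shortly after apiserver startup; minikube simply re-polls /healthz about every 500 ms until it turns ok. A hand-rolled version of the same wait (a sketch; -k skips verification of the cluster's self-signed serving cert):

	    # Poll the apiserver health endpoint until it stops returning errors.
	    until curl -ksf https://192.168.76.2:8443/healthz >/dev/null; do
	      sleep 0.5
	    done
	    echo "apiserver reports healthy"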
	I1123 11:19:59.001572  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:59.054629  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:19:59.054662  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:19:59.501530  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:59.548822  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:19:59.548847  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:00.001524  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:20:00.062415  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:20:00.062445  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:00.503790  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:20:00.542533  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:20:00.542564  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:00.639996  742315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.288042839s)
	I1123 11:20:00.895296  742315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.212501939s)
	I1123 11:20:00.898598  742315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-058071 addons enable metrics-server
	
	I1123 11:20:00.901697  742315 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 11:20:00.904683  742315 addons.go:530] duration metric: took 8.004403804s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 11:20:01.001795  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:20:01.010745  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:20:01.011963  742315 api_server.go:141] control plane version: v1.34.1
	I1123 11:20:01.011987  742315 api_server.go:131] duration metric: took 2.510550092s to wait for apiserver health ...
	I1123 11:20:01.011996  742315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:20:01.019933  742315 system_pods.go:59] 8 kube-system pods found
	I1123 11:20:01.019969  742315 system_pods.go:61] "coredns-66bc5c9577-86c67" [654888ae-1968-446b-bc77-67add47f1646] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:20:01.019978  742315 system_pods.go:61] "etcd-newest-cni-058071" [880c7442-4504-4d3f-bd99-5da4d55fc969] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:20:01.019983  742315 system_pods.go:61] "kindnet-nhmmf" [3a4984b0-33ea-41b8-bcf0-371db0376a23] Running
	I1123 11:20:01.019990  742315 system_pods.go:61] "kube-apiserver-newest-cni-058071" [057ca3d0-73ae-4a19-91e6-c4d4be793d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:20:01.019996  742315 system_pods.go:61] "kube-controller-manager-newest-cni-058071" [1b498c1b-0b85-4f48-a741-21e62c3ee4b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:20:01.020000  742315 system_pods.go:61] "kube-proxy-k574z" [5d8ab6d1-c0c9-4f98-a624-cee178c49a77] Running
	I1123 11:20:01.020006  742315 system_pods.go:61] "kube-scheduler-newest-cni-058071" [b006970c-6ef8-4240-b994-0c68b254d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:20:01.020016  742315 system_pods.go:61] "storage-provisioner" [44fe1c1c-dd81-4733-a2e9-a014c419bd7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:20:01.020022  742315 system_pods.go:74] duration metric: took 8.01996ms to wait for pod list to return data ...
	I1123 11:20:01.020031  742315 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:20:01.035823  742315 default_sa.go:45] found service account: "default"
	I1123 11:20:01.035848  742315 default_sa.go:55] duration metric: took 15.811106ms for default service account to be created ...
	I1123 11:20:01.035863  742315 kubeadm.go:587] duration metric: took 8.136060078s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:20:01.035880  742315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:20:01.044137  742315 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:20:01.044172  742315 node_conditions.go:123] node cpu capacity is 2
	I1123 11:20:01.044186  742315 node_conditions.go:105] duration metric: took 8.300311ms to run NodePressure ...
	I1123 11:20:01.044201  742315 start.go:242] waiting for startup goroutines ...
	I1123 11:20:01.044209  742315 start.go:247] waiting for cluster config update ...
	I1123 11:20:01.044220  742315 start.go:256] writing updated cluster config ...
	I1123 11:20:01.044518  742315 ssh_runner.go:195] Run: rm -f paused
	I1123 11:20:01.181428  742315 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:20:01.185170  742315 out.go:179] * Done! kubectl is now configured to use "newest-cni-058071" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.373643629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.390401375Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7da18bd0-98c3-4e3a-a755-7e25cc775509 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.396667579Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k574z/POD" id=aed013f3-2214-4a14-84a3-83d282e4b1b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.396758732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.442569533Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=aed013f3-2214-4a14-84a3-83d282e4b1b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.461298681Z" level=info msg="Ran pod sandbox 149ec80d08a020c882b7ec98b97986a19acf9e4d05f817823d9aa230b0637a2c with infra container: kube-system/kindnet-nhmmf/POD" id=7da18bd0-98c3-4e3a-a755-7e25cc775509 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.463077996Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=bf97f7a4-f6d8-4295-a1a1-fce4346110f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.492033514Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=78b54d0e-1670-498d-9fc5-055efe629161 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.531249611Z" level=info msg="Creating container: kube-system/kindnet-nhmmf/kindnet-cni" id=7d67bced-875c-41ae-8772-1efc8c3b573b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.532300064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.565666687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.566754318Z" level=info msg="Ran pod sandbox c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29 with infra container: kube-system/kube-proxy-k574z/POD" id=aed013f3-2214-4a14-84a3-83d282e4b1b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.573072627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=78d6c802-7201-4d97-9a0e-d7fe4db21062 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.57499524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.577548726Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cea845e4-c845-4300-91c3-5a901cc92316 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.581017722Z" level=info msg="Creating container: kube-system/kube-proxy-k574z/kube-proxy" id=ae7f3fbc-12b7-4fd0-97af-b231d1eb2187 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.581158394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.615976513Z" level=info msg="Created container 5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee: kube-system/kindnet-nhmmf/kindnet-cni" id=7d67bced-875c-41ae-8772-1efc8c3b573b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.631379955Z" level=info msg="Starting container: 5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee" id=69e47671-b245-467d-a399-54a10a71aa43 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.644008999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.64854983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.64878058Z" level=info msg="Started container" PID=1065 containerID=5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee description=kube-system/kindnet-nhmmf/kindnet-cni id=69e47671-b245-467d-a399-54a10a71aa43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=149ec80d08a020c882b7ec98b97986a19acf9e4d05f817823d9aa230b0637a2c
	Nov 23 11:20:00 newest-cni-058071 crio[614]: time="2025-11-23T11:20:00.195423946Z" level=info msg="Created container 07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c: kube-system/kube-proxy-k574z/kube-proxy" id=ae7f3fbc-12b7-4fd0-97af-b231d1eb2187 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:20:00 newest-cni-058071 crio[614]: time="2025-11-23T11:20:00.196786579Z" level=info msg="Starting container: 07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c" id=62de9c5d-dd60-4fa0-b5d6-39ec5053e4c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:20:00 newest-cni-058071 crio[614]: time="2025-11-23T11:20:00.208916673Z" level=info msg="Started container" PID=1075 containerID=07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c description=kube-system/kube-proxy-k574z/kube-proxy id=62de9c5d-dd60-4fa0-b5d6-39ec5053e4c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	07540e9c5dbcc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   c79dc3cd9723d       kube-proxy-k574z                            kube-system
	5f85b9911c0ac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   149ec80d08a02       kindnet-nhmmf                               kube-system
	760f7a89b92dc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   eddb051c8a14d       kube-scheduler-newest-cni-058071            kube-system
	04cc7cb59b36d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   05865354e533c       kube-apiserver-newest-cni-058071            kube-system
	0666c2f1ccc45       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   3c53b836c53e5       kube-controller-manager-newest-cni-058071   kube-system
	4290d47514723       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   76da4c951e631       etcd-newest-cni-058071                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-058071
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-058071
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=newest-cni-058071
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_19_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:19:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-058071
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:19:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-058071
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                50c4c8d6-c4e7-4ed0-b751-2e5f93061714
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-058071                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-nhmmf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-058071             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-058071    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-k574z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-058071             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-058071 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-058071 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-058071 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-058071 event: Registered Node newest-cni-058071 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-058071 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-058071 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-058071 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-058071 event: Registered Node newest-cni-058071 in Controller
	
	
	==> dmesg <==
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	[Nov23 11:19] overlayfs: idmapped layers are currently not supported
	[ +26.182636] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde] <==
	{"level":"warn","ts":"2025-11-23T11:19:55.789011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.800457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.833125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.860581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.878206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.897595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.914424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.931594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.948986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.968466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.989511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.023700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.031282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.045875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.063289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.080770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.102690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.115205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.172335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.178313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.202811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.241683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.255031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.274714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.327918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:20:05 up  4:02,  0 user,  load average: 4.59, 3.76, 3.09
	Linux newest-cni-058071 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee] <==
	I1123 11:19:59.794037       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:19:59.794532       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:19:59.794646       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:19:59.794657       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:19:59.794668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:19:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:20:00.004054       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:20:00.004094       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:20:00.004108       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:20:00.004653       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18] <==
	I1123 11:19:58.150626       1 aggregator.go:171] initial CRD sync complete...
	I1123 11:19:58.150660       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:19:58.150669       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:19:58.150888       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:19:58.150897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:19:58.155030       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:58.169571       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:19:58.170631       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:19:58.170893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:19:58.221543       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:19:58.251346       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:19:58.255938       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:19:58.264698       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:19:58.419644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:19:59.166591       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:19:59.433245       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:19:59.832062       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:20:00.043080       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:20:00.174108       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:20:00.814354       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.81.196"}
	I1123 11:20:00.878119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.117.182"}
	I1123 11:20:03.056677       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:20:03.108981       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:20:03.482861       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:20:03.532084       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6] <==
	I1123 11:20:02.980439       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 11:20:02.980740       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:20:02.980775       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:20:02.984888       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:20:02.992342       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 11:20:02.997638       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:20:02.998889       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:20:03.000148       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:03.002692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:03.002723       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:20:03.002732       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:20:03.006924       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 11:20:03.020378       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:20:03.022327       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:20:03.022444       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:20:03.022525       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-058071"
	I1123 11:20:03.022580       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 11:20:03.023747       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:20:03.023802       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:20:03.024883       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:20:03.024939       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:20:03.026802       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:20:03.037323       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:20:03.040973       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:20:03.049164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c] <==
	I1123 11:20:00.905641       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:20:01.004768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:20:01.107332       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:20:01.107740       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:20:01.107886       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:20:01.154917       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:20:01.154979       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:20:01.167822       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:20:01.168337       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:20:01.168359       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:20:01.170393       1 config.go:200] "Starting service config controller"
	I1123 11:20:01.170417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:20:01.170437       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:20:01.170441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:20:01.170454       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:20:01.170458       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:20:01.171201       1 config.go:309] "Starting node config controller"
	I1123 11:20:01.171221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:20:01.171229       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:20:01.271450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:20:01.271500       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:20:01.271547       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d] <==
	I1123 11:19:56.080488       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:20:00.651932       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:20:00.652058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:20:00.676187       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:20:00.676243       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:20:00.676281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:20:00.676297       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:20:00.676322       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:20:00.676341       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:20:00.686884       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:20:00.694579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:20:00.792529       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 11:20:00.792776       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:20:00.805792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:19:56 newest-cni-058071 kubelet[736]: E1123 11:19:56.516064     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-058071\" not found" node="newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.097466     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301101     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301215     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301254     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.301695     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-058071\" already exists" pod="kube-system/etcd-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301721     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.305443     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.336460     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-058071\" already exists" pod="kube-system/kube-apiserver-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.336497     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.405610     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-058071\" already exists" pod="kube-system/kube-controller-manager-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.405648     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.461694     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-058071\" already exists" pod="kube-system/kube-scheduler-newest-cni-058071"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.057074     736 apiserver.go:52] "Watching apiserver"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.099184     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144058     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-xtables-lock\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144145     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-xtables-lock\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144180     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-lib-modules\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144220     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-lib-modules\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144248     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-cni-cfg\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.211369     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: W1123 11:19:59.544003     736 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/crio-c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29 WatchSource:0}: Error finding container c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29: Status 404 returned error can't find the container with id c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29
	Nov 23 11:20:02 newest-cni-058071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:20:02 newest-cni-058071 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:20:02 newest-cni-058071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-058071 -n newest-cni-058071
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-058071 -n newest-cni-058071: exit status 2 (384.717346ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-058071 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c: exit status 1 (85.186398ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-86c67" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hm74w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-48h4c" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c: exit status 1
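The field-selector query above is the generic way the post-mortem surfaces pods that are not in the Running phase before describing them. A minimal Go sketch of the same lookup, shelling out to kubectl the way the helpers do (the function name is illustrative and not part of the test suite; the context name is taken from the run above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonRunningPods returns the names of pods in any namespace whose phase is
// not Running, using the same jsonpath/field-selector combination as the
// post-mortem step above.
func nonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := nonRunningPods("newest-cni-058071")
	if err != nil {
		fmt.Println("kubectl query failed:", err)
		return
	}
	fmt.Println("non-running pods:", pods)
}

Because the pods listed here had already been replaced by the time describe ran, the NotFound errors in the stderr block below are expected rather than a second failure.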
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-058071
helpers_test.go:243: (dbg) docker inspect newest-cni-058071:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559",
	        "Created": "2025-11-23T11:19:09.249053007Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:19:44.898360895Z",
	            "FinishedAt": "2025-11-23T11:19:43.783195868Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/hostname",
	        "HostsPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/hosts",
	        "LogPath": "/var/lib/docker/containers/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559-json.log",
	        "Name": "/newest-cni-058071",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-058071:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-058071",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559",
	                "LowerDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf656ec379770143aaf90cb6eb9c98557e5c65381c7f881794044040d934dc54/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-058071",
	                "Source": "/var/lib/docker/volumes/newest-cni-058071/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-058071",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-058071",
	                "name.minikube.sigs.k8s.io": "newest-cni-058071",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cef97981e41c9c065972f9da778d46d1f0dad13645f8bdf5cf9e4cfacbae35be",
	            "SandboxKey": "/var/run/docker/netns/cef97981e41c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-058071": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:83:9f:82:b3:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2ad1d74afe18771af1930500adbc0606f203b00728de9cd7c808850d196bbca",
	                    "EndpointID": "76e41c0ce156f5c877caedc39e1acad3dcf7c4cd0fef409771342ce9b26ce59d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-058071",
	                        "80b941940765"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
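The NetworkSettings.Ports map in the inspect output above is where the published host ports (33832-33836) live; the provisioning log further down reads the 22/tcp entry with a Go template to locate the SSH port. A minimal sketch of that lookup, assuming the docker CLI is on PATH (the helper name is illustrative; the format template is the one that appears verbatim in the minikube log below):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker published for the container's
// 22/tcp, using the same --format template the provisioning log below uses.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-058071")
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + port)
}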
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071: exit status 2 (373.430611ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
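Both status probes above read a single field from `minikube status --format` with a Go template ({{.APIServer}}, {{.Host}}), and a paused cluster makes the command exit non-zero even though the field still prints "Running". A minimal sketch of reading such a field while tolerating that exit code (the helper name is illustrative; treating a non-zero exit as "may be ok" is an assumption based on the notes above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField runs `minikube status --format=<tmpl>` for a profile and returns
// whatever the template printed, plus the exit code, even when minikube exits
// non-zero as in the "exit status 2 (may be ok)" probes above.
func statusField(minikubeBin, profile, tmpl string) (string, int, error) {
	cmd := exec.Command(minikubeBin, "status", "--format="+tmpl, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	text := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit can still carry a usable field value; surface the code.
		return text, exitErr.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err
	}
	return text, 0, nil
}

func main() {
	host, code, err := statusField("out/minikube-linux-arm64", "newest-cni-058071", "{{.Host}}")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("Host=%s (exit code %d)\n", host, code)
}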
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-058071 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-058071 logs -n 25: (1.085169381s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-715679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │                     │
	│ stop    │ -p embed-certs-715679 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:17 UTC │
	│ start   │ -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:17 UTC │ 23 Nov 25 11:18 UTC │
	│ image   │ no-preload-258179 image list --format=json                                                                                                                                                                                                    │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-258179 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p newest-cni-058071 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-058071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103096 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │                     │
	│ image   │ newest-cni-058071 image list --format=json                                                                                                                                                                                                    │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ pause   │ -p newest-cni-058071 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:19:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:19:44.618325  742315 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:19:44.618459  742315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:44.618469  742315 out.go:374] Setting ErrFile to fd 2...
	I1123 11:19:44.618475  742315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:19:44.618726  742315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:19:44.619087  742315 out.go:368] Setting JSON to false
	I1123 11:19:44.619971  742315 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14534,"bootTime":1763882251,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:19:44.620038  742315 start.go:143] virtualization:  
	I1123 11:19:44.623243  742315 out.go:179] * [newest-cni-058071] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:19:44.627248  742315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:19:44.627491  742315 notify.go:221] Checking for updates...
	I1123 11:19:44.633060  742315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:19:44.636027  742315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:44.638930  742315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:19:44.641896  742315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:19:44.644731  742315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:19:44.648089  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:44.648716  742315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:19:44.671629  742315 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:19:44.671751  742315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:44.738265  742315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:44.727634366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:44.738371  742315 docker.go:319] overlay module found
	I1123 11:19:44.743346  742315 out.go:179] * Using the docker driver based on existing profile
	I1123 11:19:44.746111  742315 start.go:309] selected driver: docker
	I1123 11:19:44.746128  742315 start.go:927] validating driver "docker" against &{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:44.746249  742315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:19:44.750357  742315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:19:44.807038  742315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:19:44.797465189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:19:44.807374  742315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:19:44.807404  742315 cni.go:84] Creating CNI manager for ""
	I1123 11:19:44.807461  742315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:44.807504  742315 start.go:353] cluster config:
	{Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:44.812512  742315 out.go:179] * Starting "newest-cni-058071" primary control-plane node in "newest-cni-058071" cluster
	I1123 11:19:44.815290  742315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:19:44.818178  742315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:19:44.820979  742315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:44.821031  742315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:19:44.821041  742315 cache.go:65] Caching tarball of preloaded images
	I1123 11:19:44.821068  742315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:19:44.821137  742315 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:19:44.821147  742315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:19:44.821259  742315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:44.846039  742315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:19:44.846062  742315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:19:44.846078  742315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:19:44.846108  742315 start.go:360] acquireMachinesLock for newest-cni-058071: {Name:mkcc8b04939d321e7fa14f673dfa688f531ff5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:19:44.846163  742315 start.go:364] duration metric: took 35.029µs to acquireMachinesLock for "newest-cni-058071"
	I1123 11:19:44.846188  742315 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:19:44.846201  742315 fix.go:54] fixHost starting: 
	I1123 11:19:44.846456  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:44.863432  742315 fix.go:112] recreateIfNeeded on newest-cni-058071: state=Stopped err=<nil>
	W1123 11:19:44.863463  742315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:19:41.409289  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	W1123 11:19:43.908137  735340 node_ready.go:57] node "default-k8s-diff-port-103096" has "Ready":"False" status (will retry)
	I1123 11:19:45.915466  735340 node_ready.go:49] node "default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:45.915497  735340 node_ready.go:38] duration metric: took 40.010059173s for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:19:45.915513  735340 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:45.915574  735340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:45.933170  735340 api_server.go:72] duration metric: took 42.004976922s to wait for apiserver process to appear ...
	I1123 11:19:45.933198  735340 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:19:45.933220  735340 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:19:45.962722  735340 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:19:45.965518  735340 api_server.go:141] control plane version: v1.34.1
	I1123 11:19:45.965548  735340 api_server.go:131] duration metric: took 32.341977ms to wait for apiserver health ...
	I1123 11:19:45.965557  735340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:19:45.986799  735340 system_pods.go:59] 8 kube-system pods found
	I1123 11:19:45.986840  735340 system_pods.go:61] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending
	I1123 11:19:45.986864  735340 system_pods.go:61] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:45.986911  735340 system_pods.go:61] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:45.986932  735340 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:45.986937  735340 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:45.986941  735340 system_pods.go:61] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:45.986945  735340 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:45.986962  735340 system_pods.go:61] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:45.986982  735340 system_pods.go:74] duration metric: took 21.411513ms to wait for pod list to return data ...
	I1123 11:19:45.986997  735340 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:19:45.989965  735340 default_sa.go:45] found service account: "default"
	I1123 11:19:45.990037  735340 default_sa.go:55] duration metric: took 3.032498ms for default service account to be created ...
	I1123 11:19:45.990062  735340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:19:45.997322  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:45.997456  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending
	I1123 11:19:45.997482  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:45.997506  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:45.997545  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:45.997571  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:45.997593  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:45.997632  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:45.997659  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:45.997719  735340 retry.go:31] will retry after 223.844429ms: missing components: kube-dns
	I1123 11:19:46.226266  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.226302  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:19:46.226310  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.226316  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.226339  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.226372  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.226383  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.226387  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.226393  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:46.226415  735340 retry.go:31] will retry after 269.174574ms: missing components: kube-dns
	I1123 11:19:46.503566  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.503648  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:19:46.503680  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.503702  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.503731  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.503763  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.503788  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.503810  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.503845  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:19:46.503874  735340 retry.go:31] will retry after 349.134365ms: missing components: kube-dns
	I1123 11:19:46.857167  735340 system_pods.go:86] 8 kube-system pods found
	I1123 11:19:46.857257  735340 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running
	I1123 11:19:46.857290  735340 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running
	I1123 11:19:46.857313  735340 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:19:46.857335  735340 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running
	I1123 11:19:46.857356  735340 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:19:46.857388  735340 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:19:46.857443  735340 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running
	I1123 11:19:46.857454  735340 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:19:46.857464  735340 system_pods.go:126] duration metric: took 867.382706ms to wait for k8s-apps to be running ...
	I1123 11:19:46.857471  735340 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:19:46.857565  735340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:19:46.871621  735340 system_svc.go:56] duration metric: took 14.138981ms WaitForService to wait for kubelet
	I1123 11:19:46.871693  735340 kubeadm.go:587] duration metric: took 42.94350422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:19:46.871718  735340 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:19:46.874817  735340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:19:46.874848  735340 node_conditions.go:123] node cpu capacity is 2
	I1123 11:19:46.874862  735340 node_conditions.go:105] duration metric: took 3.137698ms to run NodePressure ...
	I1123 11:19:46.874875  735340 start.go:242] waiting for startup goroutines ...
	I1123 11:19:46.874883  735340 start.go:247] waiting for cluster config update ...
	I1123 11:19:46.874900  735340 start.go:256] writing updated cluster config ...
	I1123 11:19:46.875232  735340 ssh_runner.go:195] Run: rm -f paused
	I1123 11:19:46.878961  735340 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:19:46.957386  735340 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.962693  735340 pod_ready.go:94] pod "coredns-66bc5c9577-jxjjg" is "Ready"
	I1123 11:19:46.962731  735340 pod_ready.go:86] duration metric: took 5.28005ms for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.965268  735340 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.969979  735340 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:46.970010  735340 pod_ready.go:86] duration metric: took 4.715712ms for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.972372  735340 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.976670  735340 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:46.976698  735340 pod_ready.go:86] duration metric: took 4.302763ms for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:46.979034  735340 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.283559  735340 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:47.283586  735340 pod_ready.go:86] duration metric: took 304.480419ms for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.482856  735340 pod_ready.go:83] waiting for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:47.883105  735340 pod_ready.go:94] pod "kube-proxy-kp7fv" is "Ready"
	I1123 11:19:47.883132  735340 pod_ready.go:86] duration metric: took 400.237422ms for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.083580  735340 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.482628  735340 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103096" is "Ready"
	I1123 11:19:48.482672  735340 pod_ready.go:86] duration metric: took 399.055275ms for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:19:48.482687  735340 pod_ready.go:40] duration metric: took 1.603691622s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:19:48.568932  735340 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:19:48.572293  735340 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103096" cluster and "default" namespace by default
	I1123 11:19:44.866695  742315 out.go:252] * Restarting existing docker container for "newest-cni-058071" ...
	I1123 11:19:44.866781  742315 cli_runner.go:164] Run: docker start newest-cni-058071
	I1123 11:19:45.269045  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:45.296713  742315 kic.go:430] container "newest-cni-058071" state is running.
	I1123 11:19:45.297507  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:45.323006  742315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/config.json ...
	I1123 11:19:45.323390  742315 machine.go:94] provisionDockerMachine start ...
	I1123 11:19:45.323513  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:45.350795  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:45.351325  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:45.351340  742315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:19:45.353393  742315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:19:48.507891  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:48.507913  742315 ubuntu.go:182] provisioning hostname "newest-cni-058071"
	I1123 11:19:48.507976  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:48.534692  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:48.535018  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:48.535031  742315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-058071 && echo "newest-cni-058071" | sudo tee /etc/hostname
	I1123 11:19:48.752756  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-058071
	
	I1123 11:19:48.752833  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:48.803242  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:48.803544  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:48.803562  742315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-058071' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-058071/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-058071' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:19:48.973866  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
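
	The whole provisioning pass above is driven over SSH to the forwarded container port (127.0.0.1:33832, user docker, the profile's id_rsa key). A minimal sketch, not minikube's own code, of running one remote command the same way with golang.org/x/crypto/ssh; the key path and port are taken from this log and will differ per profile:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address copied from the log above; adjust for your own profile.
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-058071/id_rsa"))
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33832", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Same first command the provisioner runs: "hostname".
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}
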
	I1123 11:19:48.973934  742315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:19:48.973963  742315 ubuntu.go:190] setting up certificates
	I1123 11:19:48.973973  742315 provision.go:84] configureAuth start
	I1123 11:19:48.974067  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:48.991997  742315 provision.go:143] copyHostCerts
	I1123 11:19:48.992073  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:19:48.992100  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:19:48.992182  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:19:48.992279  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:19:48.992290  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:19:48.992317  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:19:48.992420  742315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:19:48.992430  742315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:19:48.992453  742315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:19:48.992503  742315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.newest-cni-058071 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-058071]
	I1123 11:19:49.168901  742315 provision.go:177] copyRemoteCerts
	I1123 11:19:49.169018  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:19:49.169113  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.219548  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:49.333289  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:19:49.353433  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:19:49.372314  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:19:49.392569  742315 provision.go:87] duration metric: took 418.573025ms to configureAuth
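
	configureAuth above regenerates the machine server certificate with the SAN list [127.0.0.1 192.168.76.2 localhost minikube newest-cni-058071]. A small sketch, assuming a server.pem like the one scp'd above, for inspecting those SANs and the expiry with Go's crypto/x509:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Path mirrors the machines/server.pem used in the log; substitute your own .minikube tree.
		data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/server.pem"))
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("subject:   ", cert.Subject)
		fmt.Println("DNS SANs:  ", cert.DNSNames)
		fmt.Println("IP SANs:   ", cert.IPAddresses)
		fmt.Println("not after: ", cert.NotAfter)
	}
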
	I1123 11:19:49.392609  742315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:19:49.392854  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:49.392993  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.411998  742315 main.go:143] libmachine: Using SSH client type: native
	I1123 11:19:49.412335  742315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1123 11:19:49.412356  742315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:19:49.762536  742315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:19:49.762562  742315 machine.go:97] duration metric: took 4.439158639s to provisionDockerMachine
	I1123 11:19:49.762575  742315 start.go:293] postStartSetup for "newest-cni-058071" (driver="docker")
	I1123 11:19:49.762587  742315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:19:49.762670  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:19:49.762719  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.780214  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:49.889878  742315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:19:49.893471  742315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:19:49.893550  742315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:19:49.893570  742315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:19:49.893624  742315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:19:49.893705  742315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:19:49.893808  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:19:49.901459  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:49.920044  742315 start.go:296] duration metric: took 157.452391ms for postStartSetup
	I1123 11:19:49.920169  742315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:19:49.920240  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:49.938475  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.043034  742315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:19:50.048308  742315 fix.go:56] duration metric: took 5.202099069s for fixHost
	I1123 11:19:50.048334  742315 start.go:83] releasing machines lock for "newest-cni-058071", held for 5.20215708s
	I1123 11:19:50.048453  742315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-058071
	I1123 11:19:50.066857  742315 ssh_runner.go:195] Run: cat /version.json
	I1123 11:19:50.066917  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:50.066926  742315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:19:50.067013  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:50.100221  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.101997  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:50.295444  742315 ssh_runner.go:195] Run: systemctl --version
	I1123 11:19:50.301801  742315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:19:50.338619  742315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:19:50.342949  742315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:19:50.343054  742315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:19:50.351186  742315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:19:50.351212  742315 start.go:496] detecting cgroup driver to use...
	I1123 11:19:50.351269  742315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:19:50.351347  742315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:19:50.367066  742315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:19:50.381479  742315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:19:50.381581  742315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:19:50.399390  742315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:19:50.413833  742315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:19:50.526594  742315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:19:50.650905  742315 docker.go:234] disabling docker service ...
	I1123 11:19:50.651029  742315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:19:50.668907  742315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:19:50.683792  742315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:19:50.813878  742315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:19:50.941111  742315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:19:50.954589  742315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:19:50.969124  742315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:19:50.969233  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.978239  742315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:19:50.978310  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.987886  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:50.997715  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.009217  742315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:19:51.019070  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.030345  742315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.040370  742315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:19:51.051079  742315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:19:51.059983  742315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:19:51.070139  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:51.242835  742315 ssh_runner.go:195] Run: sudo systemctl restart crio
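
	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts crio. An illustrative sketch, not the minikube implementation, of the pause_image substitution done in Go instead of sed:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above

		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}
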
	I1123 11:19:51.466880  742315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:19:51.466954  742315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:19:51.473586  742315 start.go:564] Will wait 60s for crictl version
	I1123 11:19:51.473743  742315 ssh_runner.go:195] Run: which crictl
	I1123 11:19:51.479330  742315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:19:51.509369  742315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:19:51.509577  742315 ssh_runner.go:195] Run: crio --version
	I1123 11:19:51.540482  742315 ssh_runner.go:195] Run: crio --version
	I1123 11:19:51.573187  742315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:19:51.576058  742315 cli_runner.go:164] Run: docker network inspect newest-cni-058071 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:19:51.596104  742315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:19:51.600564  742315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:51.614496  742315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 11:19:51.617613  742315 kubeadm.go:884] updating cluster {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:19:51.617764  742315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:19:51.617839  742315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:51.655314  742315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:51.655339  742315 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:19:51.655432  742315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:19:51.685147  742315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:19:51.685170  742315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:19:51.685178  742315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:19:51.685285  742315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-058071 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:19:51.685375  742315 ssh_runner.go:195] Run: crio config
	I1123 11:19:51.743255  742315 cni.go:84] Creating CNI manager for ""
	I1123 11:19:51.743285  742315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:19:51.743310  742315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 11:19:51.743335  742315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-058071 NodeName:newest-cni-058071 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:19:51.743471  742315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-058071"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:19:51.743557  742315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:19:51.753883  742315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:19:51.754006  742315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:19:51.762325  742315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 11:19:51.775712  742315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:19:51.788529  742315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
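
	The generated kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sketch for sanity-checking one field of such a multi-document file with a generic YAML decoder (gopkg.in/yaml.v3); the path and field names follow the log and the v1beta4 documents above:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp line above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// The file holds several YAML documents separated by "---"; decode them one by one.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			if doc["kind"] == "ClusterConfiguration" {
				if net, ok := doc["networking"].(map[string]interface{}); ok {
					fmt.Println("podSubnet:", net["podSubnet"]) // expected: 10.42.0.0/16 per the log
				}
			}
		}
	}
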
	I1123 11:19:51.804648  742315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:19:51.809303  742315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:19:51.821570  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:51.938837  742315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:51.957972  742315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071 for IP: 192.168.76.2
	I1123 11:19:51.958035  742315 certs.go:195] generating shared ca certs ...
	I1123 11:19:51.958066  742315 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:51.958226  742315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:19:51.958310  742315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:19:51.958343  742315 certs.go:257] generating profile certs ...
	I1123 11:19:51.958450  742315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/client.key
	I1123 11:19:51.958593  742315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key.cc862dfe
	I1123 11:19:51.958672  742315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key
	I1123 11:19:51.958808  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:19:51.958872  742315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:19:51.958899  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:19:51.958958  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:19:51.959016  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:19:51.959072  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:19:51.959151  742315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:19:51.959843  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:19:51.980033  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:19:52.000104  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:19:52.023963  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:19:52.047526  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:19:52.069834  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:19:52.095636  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:19:52.128764  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/newest-cni-058071/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:19:52.158765  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:19:52.179578  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:19:52.200119  742315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:19:52.219939  742315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:19:52.233651  742315 ssh_runner.go:195] Run: openssl version
	I1123 11:19:52.239968  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:19:52.248699  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.252974  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.253097  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:19:52.296708  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:19:52.306614  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:19:52.314774  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.318587  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.318708  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:19:52.359535  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:19:52.367601  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:19:52.375829  742315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.379462  742315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.379598  742315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:19:52.424527  742315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:19:52.432595  742315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:19:52.436406  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:19:52.478133  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:19:52.519288  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:19:52.560663  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:19:52.611632  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:19:52.684174  742315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
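
	Each certificate above is checked with openssl x509 -checkend 86400, i.e. whether it expires within the next 24 hours. A rough Go equivalent for a single file, using one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Same check as: openssl x509 -noout -in <file> -checkend 86400
		const path = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond the next 24h, not after:", cert.NotAfter)
	}
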
	I1123 11:19:52.766400  742315 kubeadm.go:401] StartCluster: {Name:newest-cni-058071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-058071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:19:52.766503  742315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:19:52.766621  742315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:19:52.821235  742315 cri.go:89] found id: "760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d"
	I1123 11:19:52.821260  742315 cri.go:89] found id: "04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18"
	I1123 11:19:52.821266  742315 cri.go:89] found id: "0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6"
	I1123 11:19:52.821270  742315 cri.go:89] found id: "4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde"
	I1123 11:19:52.821278  742315 cri.go:89] found id: ""
	I1123 11:19:52.821361  742315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:19:52.846270  742315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:19:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:19:52.846386  742315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:19:52.863485  742315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:19:52.863558  742315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:19:52.863650  742315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:19:52.881820  742315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:19:52.882496  742315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-058071" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:52.882823  742315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-058071" cluster setting kubeconfig missing "newest-cni-058071" context setting]
	I1123 11:19:52.883361  742315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.885199  742315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:19:52.898096  742315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:19:52.898177  742315 kubeadm.go:602] duration metric: took 34.598927ms to restartPrimaryControlPlane
	I1123 11:19:52.898243  742315 kubeadm.go:403] duration metric: took 131.853098ms to StartCluster
	I1123 11:19:52.898279  742315 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.898368  742315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:19:52.899447  742315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:19:52.899741  742315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:19:52.900274  742315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:19:52.900357  742315 config.go:182] Loaded profile config "newest-cni-058071": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:19:52.900367  742315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-058071"
	I1123 11:19:52.900382  742315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-058071"
	W1123 11:19:52.900388  742315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:19:52.900413  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.900418  742315 addons.go:70] Setting dashboard=true in profile "newest-cni-058071"
	I1123 11:19:52.900430  742315 addons.go:239] Setting addon dashboard=true in "newest-cni-058071"
	W1123 11:19:52.900436  742315 addons.go:248] addon dashboard should already be in state true
	I1123 11:19:52.900455  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.900890  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.901140  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.901374  742315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-058071"
	I1123 11:19:52.901400  742315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-058071"
	I1123 11:19:52.902124  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.905999  742315 out.go:179] * Verifying Kubernetes components...
	I1123 11:19:52.909155  742315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:19:52.942466  742315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-058071"
	W1123 11:19:52.942488  742315 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:19:52.942512  742315 host.go:66] Checking if "newest-cni-058071" exists ...
	I1123 11:19:52.942959  742315 cli_runner.go:164] Run: docker container inspect newest-cni-058071 --format={{.State.Status}}
	I1123 11:19:52.980854  742315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:19:52.983078  742315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:19:52.986214  742315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:19:52.986266  742315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:52.986282  742315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:19:52.986350  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:52.990630  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:19:52.990653  742315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:19:52.990727  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:52.995804  742315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:52.995839  742315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:19:52.995980  742315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-058071
	I1123 11:19:53.048306  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.058976  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.071262  742315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/newest-cni-058071/id_rsa Username:docker}
	I1123 11:19:53.277894  742315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:19:53.304530  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:19:53.318473  742315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:19:53.318551  742315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:19:53.351895  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:19:53.374720  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:19:53.374745  742315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:19:53.482645  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:19:53.482670  742315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:19:53.523581  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:19:53.523606  742315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:19:53.544973  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:19:53.544999  742315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:19:53.568172  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:19:53.568197  742315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:19:53.591500  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:19:53.591524  742315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:19:53.610849  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:19:53.610873  742315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:19:53.634614  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:19:53.634640  742315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:19:53.659063  742315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:19:53.659089  742315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:19:53.682748  742315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
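
	All dashboard manifests are applied in a single kubectl apply with repeated -f flags, executed over SSH with KUBECONFIG=/var/lib/minikube/kubeconfig. A sketch of the same invocation wrapped in Go; the log runs it inside the node, so that kubeconfig path is only valid there, and the shortened file list here is illustrative:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Subset of the manifest list from the kubectl apply line above.
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig") // as in the log
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
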
	I1123 11:19:58.501367  742315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.182790758s)
	I1123 11:19:58.501419  742315 api_server.go:72] duration metric: took 5.601602791s to wait for apiserver process to appear ...
	I1123 11:19:58.501429  742315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:19:58.501450  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:58.501445  742315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.196844886s)
	I1123 11:19:58.619539  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:19:58.619568  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
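
	The 500s above are expected while poststarthook/rbac/bootstrap-roles is still pending; minikube keeps re-polling /healthz until it returns 200. A minimal polling sketch against the endpoint from this log, with TLS verification skipped only because this is a throwaway local test cluster:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed cert for this profile, so verification is skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		const url = "https://192.168.76.2:8443/healthz" // endpoint from the log above
		for {
			resp, err := client.Get(url)
			if err != nil {
				log.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz: %d\n", resp.StatusCode)
				if resp.StatusCode == http.StatusOK {
					fmt.Println(string(body))
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
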
	I1123 11:19:59.001572  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:59.054629  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:19:59.054662  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:19:59.501530  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:19:59.548822  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:19:59.548847  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:00.001524  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:20:00.062415  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:20:00.062445  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:00.503790  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:20:00.542533  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:20:00.542564  742315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:00.639996  742315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.288042839s)
	I1123 11:20:00.895296  742315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.212501939s)
	I1123 11:20:00.898598  742315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-058071 addons enable metrics-server
	
	I1123 11:20:00.901697  742315 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 11:20:00.904683  742315 addons.go:530] duration metric: took 8.004403804s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 11:20:01.001795  742315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:20:01.010745  742315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:20:01.011963  742315 api_server.go:141] control plane version: v1.34.1
	I1123 11:20:01.011987  742315 api_server.go:131] duration metric: took 2.510550092s to wait for apiserver health ...
	I1123 11:20:01.011996  742315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:20:01.019933  742315 system_pods.go:59] 8 kube-system pods found
	I1123 11:20:01.019969  742315 system_pods.go:61] "coredns-66bc5c9577-86c67" [654888ae-1968-446b-bc77-67add47f1646] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:20:01.019978  742315 system_pods.go:61] "etcd-newest-cni-058071" [880c7442-4504-4d3f-bd99-5da4d55fc969] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:20:01.019983  742315 system_pods.go:61] "kindnet-nhmmf" [3a4984b0-33ea-41b8-bcf0-371db0376a23] Running
	I1123 11:20:01.019990  742315 system_pods.go:61] "kube-apiserver-newest-cni-058071" [057ca3d0-73ae-4a19-91e6-c4d4be793d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:20:01.019996  742315 system_pods.go:61] "kube-controller-manager-newest-cni-058071" [1b498c1b-0b85-4f48-a741-21e62c3ee4b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:20:01.020000  742315 system_pods.go:61] "kube-proxy-k574z" [5d8ab6d1-c0c9-4f98-a624-cee178c49a77] Running
	I1123 11:20:01.020006  742315 system_pods.go:61] "kube-scheduler-newest-cni-058071" [b006970c-6ef8-4240-b994-0c68b254d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:20:01.020016  742315 system_pods.go:61] "storage-provisioner" [44fe1c1c-dd81-4733-a2e9-a014c419bd7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:20:01.020022  742315 system_pods.go:74] duration metric: took 8.01996ms to wait for pod list to return data ...
	I1123 11:20:01.020031  742315 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:20:01.035823  742315 default_sa.go:45] found service account: "default"
	I1123 11:20:01.035848  742315 default_sa.go:55] duration metric: took 15.811106ms for default service account to be created ...
	I1123 11:20:01.035863  742315 kubeadm.go:587] duration metric: took 8.136060078s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:20:01.035880  742315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:20:01.044137  742315 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:20:01.044172  742315 node_conditions.go:123] node cpu capacity is 2
	I1123 11:20:01.044186  742315 node_conditions.go:105] duration metric: took 8.300311ms to run NodePressure ...
	I1123 11:20:01.044201  742315 start.go:242] waiting for startup goroutines ...
	I1123 11:20:01.044209  742315 start.go:247] waiting for cluster config update ...
	I1123 11:20:01.044220  742315 start.go:256] writing updated cluster config ...
	I1123 11:20:01.044518  742315 ssh_runner.go:195] Run: rm -f paused
	I1123 11:20:01.181428  742315 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:20:01.185170  742315 out.go:179] * Done! kubectl is now configured to use "newest-cni-058071" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.373643629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.390401375Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7da18bd0-98c3-4e3a-a755-7e25cc775509 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.396667579Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k574z/POD" id=aed013f3-2214-4a14-84a3-83d282e4b1b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.396758732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.442569533Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=aed013f3-2214-4a14-84a3-83d282e4b1b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.461298681Z" level=info msg="Ran pod sandbox 149ec80d08a020c882b7ec98b97986a19acf9e4d05f817823d9aa230b0637a2c with infra container: kube-system/kindnet-nhmmf/POD" id=7da18bd0-98c3-4e3a-a755-7e25cc775509 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.463077996Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=bf97f7a4-f6d8-4295-a1a1-fce4346110f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.492033514Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=78b54d0e-1670-498d-9fc5-055efe629161 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.531249611Z" level=info msg="Creating container: kube-system/kindnet-nhmmf/kindnet-cni" id=7d67bced-875c-41ae-8772-1efc8c3b573b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.532300064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.565666687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.566754318Z" level=info msg="Ran pod sandbox c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29 with infra container: kube-system/kube-proxy-k574z/POD" id=aed013f3-2214-4a14-84a3-83d282e4b1b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.573072627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=78d6c802-7201-4d97-9a0e-d7fe4db21062 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.57499524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.577548726Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cea845e4-c845-4300-91c3-5a901cc92316 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.581017722Z" level=info msg="Creating container: kube-system/kube-proxy-k574z/kube-proxy" id=ae7f3fbc-12b7-4fd0-97af-b231d1eb2187 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.581158394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.615976513Z" level=info msg="Created container 5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee: kube-system/kindnet-nhmmf/kindnet-cni" id=7d67bced-875c-41ae-8772-1efc8c3b573b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.631379955Z" level=info msg="Starting container: 5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee" id=69e47671-b245-467d-a399-54a10a71aa43 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.644008999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.64854983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:19:59 newest-cni-058071 crio[614]: time="2025-11-23T11:19:59.64878058Z" level=info msg="Started container" PID=1065 containerID=5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee description=kube-system/kindnet-nhmmf/kindnet-cni id=69e47671-b245-467d-a399-54a10a71aa43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=149ec80d08a020c882b7ec98b97986a19acf9e4d05f817823d9aa230b0637a2c
	Nov 23 11:20:00 newest-cni-058071 crio[614]: time="2025-11-23T11:20:00.195423946Z" level=info msg="Created container 07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c: kube-system/kube-proxy-k574z/kube-proxy" id=ae7f3fbc-12b7-4fd0-97af-b231d1eb2187 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:20:00 newest-cni-058071 crio[614]: time="2025-11-23T11:20:00.196786579Z" level=info msg="Starting container: 07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c" id=62de9c5d-dd60-4fa0-b5d6-39ec5053e4c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:20:00 newest-cni-058071 crio[614]: time="2025-11-23T11:20:00.208916673Z" level=info msg="Started container" PID=1075 containerID=07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c description=kube-system/kube-proxy-k574z/kube-proxy id=62de9c5d-dd60-4fa0-b5d6-39ec5053e4c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	07540e9c5dbcc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   c79dc3cd9723d       kube-proxy-k574z                            kube-system
	5f85b9911c0ac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   149ec80d08a02       kindnet-nhmmf                               kube-system
	760f7a89b92dc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   eddb051c8a14d       kube-scheduler-newest-cni-058071            kube-system
	04cc7cb59b36d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   05865354e533c       kube-apiserver-newest-cni-058071            kube-system
	0666c2f1ccc45       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   3c53b836c53e5       kube-controller-manager-newest-cni-058071   kube-system
	4290d47514723       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   76da4c951e631       etcd-newest-cni-058071                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-058071
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-058071
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=newest-cni-058071
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_19_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:19:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-058071
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:19:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 11:19:58 +0000   Sun, 23 Nov 2025 11:19:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-058071
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                50c4c8d6-c4e7-4ed0-b751-2e5f93061714
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-058071                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-nhmmf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-058071             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-058071    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-k574z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-058071             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-058071 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-058071 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-058071 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-058071 event: Registered Node newest-cni-058071 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-058071 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-058071 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-058071 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-058071 event: Registered Node newest-cni-058071 in Controller
	
	
	==> dmesg <==
	[Nov23 11:00] overlayfs: idmapped layers are currently not supported
	[ +49.395604] overlayfs: idmapped layers are currently not supported
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	[Nov23 11:19] overlayfs: idmapped layers are currently not supported
	[ +26.182636] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4290d47514723983c4826662bf23321356d253a3be39695fbdcadf5bbc8d9fde] <==
	{"level":"warn","ts":"2025-11-23T11:19:55.789011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.800457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.833125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.860581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.878206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.897595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.914424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.931594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.948986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.968466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:55.989511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.023700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.031282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.045875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.063289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.080770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.102690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.115205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.172335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.178313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.202811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.241683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.255031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.274714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:19:56.327918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:20:07 up  4:02,  0 user,  load average: 4.59, 3.76, 3.09
	Linux newest-cni-058071 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f85b9911c0ac7c44b87a8e8d8808f6627cba85fbcb3186ea846610042836dee] <==
	I1123 11:19:59.794037       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:19:59.794532       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 11:19:59.794646       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:19:59.794657       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:19:59.794668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:19:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:20:00.004054       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:20:00.004094       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:20:00.004108       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:20:00.004653       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [04cc7cb59b36d6840b17473f1a41a5430850e266ef355149cf235280388d1e18] <==
	I1123 11:19:58.150626       1 aggregator.go:171] initial CRD sync complete...
	I1123 11:19:58.150660       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:19:58.150669       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:19:58.150888       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 11:19:58.150897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 11:19:58.155030       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:19:58.169571       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:19:58.170631       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:19:58.170893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:19:58.221543       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:19:58.251346       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:19:58.255938       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 11:19:58.264698       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:19:58.419644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:19:59.166591       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:19:59.433245       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:19:59.832062       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:20:00.043080       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:20:00.174108       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:20:00.814354       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.81.196"}
	I1123 11:20:00.878119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.117.182"}
	I1123 11:20:03.056677       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:20:03.108981       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:20:03.482861       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:20:03.532084       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0666c2f1ccc456064af80c66ee9890fc736805f3940cafca3cffadb90fc5c2b6] <==
	I1123 11:20:02.980439       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 11:20:02.980740       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 11:20:02.980775       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:20:02.984888       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:20:02.992342       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 11:20:02.997638       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:20:02.998889       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:20:03.000148       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:03.002692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:03.002723       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:20:03.002732       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:20:03.006924       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 11:20:03.020378       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:20:03.022327       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:20:03.022444       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:20:03.022525       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-058071"
	I1123 11:20:03.022580       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 11:20:03.023747       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:20:03.023802       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 11:20:03.024883       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:20:03.024939       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:20:03.026802       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:20:03.037323       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:20:03.040973       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:20:03.049164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [07540e9c5dbcc707c6f6267b7a0a5a28183217815a3ede5679c33af54a36e13c] <==
	I1123 11:20:00.905641       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:20:01.004768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:20:01.107332       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:20:01.107740       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 11:20:01.107886       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:20:01.154917       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:20:01.154979       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:20:01.167822       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:20:01.168337       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:20:01.168359       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:20:01.170393       1 config.go:200] "Starting service config controller"
	I1123 11:20:01.170417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:20:01.170437       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:20:01.170441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:20:01.170454       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:20:01.170458       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:20:01.171201       1 config.go:309] "Starting node config controller"
	I1123 11:20:01.171221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:20:01.171229       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:20:01.271450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:20:01.271500       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:20:01.271547       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [760f7a89b92dc0b3ad894caa5cdc86f98a98fddaa21f406ddf501404d70a950d] <==
	I1123 11:19:56.080488       1 serving.go:386] Generated self-signed cert in-memory
	I1123 11:20:00.651932       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 11:20:00.652058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:20:00.676187       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 11:20:00.676243       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 11:20:00.676281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:20:00.676297       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:20:00.676322       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:20:00.676341       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 11:20:00.686884       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 11:20:00.694579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 11:20:00.792529       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 11:20:00.792776       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 11:20:00.805792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:19:56 newest-cni-058071 kubelet[736]: E1123 11:19:56.516064     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-058071\" not found" node="newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.097466     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301101     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301215     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301254     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.301695     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-058071\" already exists" pod="kube-system/etcd-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.301721     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.305443     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.336460     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-058071\" already exists" pod="kube-system/kube-apiserver-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.336497     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.405610     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-058071\" already exists" pod="kube-system/kube-controller-manager-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: I1123 11:19:58.405648     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-058071"
	Nov 23 11:19:58 newest-cni-058071 kubelet[736]: E1123 11:19:58.461694     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-058071\" already exists" pod="kube-system/kube-scheduler-newest-cni-058071"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.057074     736 apiserver.go:52] "Watching apiserver"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.099184     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144058     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-xtables-lock\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144145     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-xtables-lock\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144180     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-lib-modules\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144220     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d8ab6d1-c0c9-4f98-a624-cee178c49a77-lib-modules\") pod \"kube-proxy-k574z\" (UID: \"5d8ab6d1-c0c9-4f98-a624-cee178c49a77\") " pod="kube-system/kube-proxy-k574z"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.144248     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a4984b0-33ea-41b8-bcf0-371db0376a23-cni-cfg\") pod \"kindnet-nhmmf\" (UID: \"3a4984b0-33ea-41b8-bcf0-371db0376a23\") " pod="kube-system/kindnet-nhmmf"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: I1123 11:19:59.211369     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:19:59 newest-cni-058071 kubelet[736]: W1123 11:19:59.544003     736 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/80b941940765e992f2660e1bbfe61392f0bcdef5df4e1ba2aa4e97b4be6f2559/crio-c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29 WatchSource:0}: Error finding container c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29: Status 404 returned error can't find the container with id c79dc3cd9723d834f60d65130df65095d7179f9d05a73f4e4726f886494e8f29
	Nov 23 11:20:02 newest-cni-058071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:20:02 newest-cni-058071 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:20:02 newest-cni-058071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-058071 -n newest-cni-058071
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-058071 -n newest-cni-058071: exit status 2 (368.450488ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-058071 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c: exit status 1 (84.582918ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-86c67" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hm74w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-48h4c" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-058071 describe pod coredns-66bc5c9577-86c67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hm74w kubernetes-dashboard-855c9754f9-48h4c: exit status 1
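The two-step post-mortem above first collects the names of pods that are not in the Running phase (the field-selector query) and then tries to describe each one; the describes come back NotFound, most likely because kubectl describe is run without a namespace while coredns, storage-provisioner and the dashboard pods live in kube-system and kubernetes-dashboard. A minimal Go sketch of the same two calls, purely illustrative and not the helpers_test.go code, with the context name taken from the log:

// postmortem_sketch.go: list non-Running pods for a kubecontext, then describe them.
// Illustrative only; assumes kubectl is on PATH and the context exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-058071" // context name from the log above

	// Step 1: names of pods whose phase is not Running, across all namespaces.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Printf("listing non-running pods failed: %v\n%s", err, out)
		return
	}

	// Step 2: describe each pod. Without -n this only searches the current
	// (default) namespace, which is why the run above reported NotFound for
	// pods that actually live in kube-system and kubernetes-dashboard.
	for _, name := range strings.Fields(string(out)) {
		desc, err := exec.Command("kubectl", "--context", ctx, "describe", "pod", name).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s: %v\n", name, err)
			continue
		}
		fmt.Println(string(desc))
	}
}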
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-103096 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-103096 --alsologtostderr -v=1: exit status 80 (2.022346898s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-103096 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:21:34.724578  751095 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:21:34.724694  751095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:21:34.724703  751095 out.go:374] Setting ErrFile to fd 2...
	I1123 11:21:34.724708  751095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:21:34.724951  751095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:21:34.725186  751095 out.go:368] Setting JSON to false
	I1123 11:21:34.725210  751095 mustload.go:66] Loading cluster: default-k8s-diff-port-103096
	I1123 11:21:34.725679  751095 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:21:34.726146  751095 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:21:34.749973  751095 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:21:34.750320  751095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:21:34.810196  751095 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 11:21:34.793235829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:21:34.811213  751095 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-103096 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 11:21:34.814672  751095 out.go:179] * Pausing node default-k8s-diff-port-103096 ... 
	I1123 11:21:34.817611  751095 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:21:34.817963  751095 ssh_runner.go:195] Run: systemctl --version
	I1123 11:21:34.818017  751095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:21:34.836094  751095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:21:34.944543  751095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:21:34.960227  751095 pause.go:52] kubelet running: true
	I1123 11:21:34.960294  751095 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:21:35.224590  751095 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:21:35.224697  751095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:21:35.300698  751095 cri.go:89] found id: "5af6f79168eea00838e2945ae540d3eaf1f76e899c71f27379162736cced60d4"
	I1123 11:21:35.300723  751095 cri.go:89] found id: "b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef"
	I1123 11:21:35.300728  751095 cri.go:89] found id: "19086a27c9d0305f6aaed6b856a8c3465b3c5186f5220a276e23f82da308c4f6"
	I1123 11:21:35.300732  751095 cri.go:89] found id: "2fcda04eae0c435a3ecda39fde16360c7527d896df39314f18046cd3abfb3b0c"
	I1123 11:21:35.300735  751095 cri.go:89] found id: "cd47bb53c6c9409136a0de45f335cfa1b4ae0d245cb0ee6b78f4018bf100d946"
	I1123 11:21:35.300739  751095 cri.go:89] found id: "e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f"
	I1123 11:21:35.300743  751095 cri.go:89] found id: "627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f"
	I1123 11:21:35.300751  751095 cri.go:89] found id: "21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4"
	I1123 11:21:35.300754  751095 cri.go:89] found id: "005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e"
	I1123 11:21:35.300761  751095 cri.go:89] found id: "80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	I1123 11:21:35.300764  751095 cri.go:89] found id: "511509d807681fad8dd77857c090e47e76497556036046e2c6c20640528a4c94"
	I1123 11:21:35.300767  751095 cri.go:89] found id: ""
	I1123 11:21:35.300816  751095 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:21:35.312762  751095 retry.go:31] will retry after 370.55275ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:21:35Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:21:35.684358  751095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:21:35.698650  751095 pause.go:52] kubelet running: false
	I1123 11:21:35.698726  751095 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:21:35.906636  751095 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:21:35.906721  751095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:21:35.989502  751095 cri.go:89] found id: "5af6f79168eea00838e2945ae540d3eaf1f76e899c71f27379162736cced60d4"
	I1123 11:21:35.989525  751095 cri.go:89] found id: "b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef"
	I1123 11:21:35.989530  751095 cri.go:89] found id: "19086a27c9d0305f6aaed6b856a8c3465b3c5186f5220a276e23f82da308c4f6"
	I1123 11:21:35.989534  751095 cri.go:89] found id: "2fcda04eae0c435a3ecda39fde16360c7527d896df39314f18046cd3abfb3b0c"
	I1123 11:21:35.989537  751095 cri.go:89] found id: "cd47bb53c6c9409136a0de45f335cfa1b4ae0d245cb0ee6b78f4018bf100d946"
	I1123 11:21:35.989541  751095 cri.go:89] found id: "e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f"
	I1123 11:21:35.989544  751095 cri.go:89] found id: "627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f"
	I1123 11:21:35.989547  751095 cri.go:89] found id: "21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4"
	I1123 11:21:35.989568  751095 cri.go:89] found id: "005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e"
	I1123 11:21:35.989582  751095 cri.go:89] found id: "80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	I1123 11:21:35.989587  751095 cri.go:89] found id: "511509d807681fad8dd77857c090e47e76497556036046e2c6c20640528a4c94"
	I1123 11:21:35.989590  751095 cri.go:89] found id: ""
	I1123 11:21:35.989650  751095 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:21:36.003947  751095 retry.go:31] will retry after 206.451329ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:21:35Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:21:36.211546  751095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:21:36.225053  751095 pause.go:52] kubelet running: false
	I1123 11:21:36.225127  751095 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 11:21:36.506396  751095 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 11:21:36.506492  751095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 11:21:36.643032  751095 cri.go:89] found id: "5af6f79168eea00838e2945ae540d3eaf1f76e899c71f27379162736cced60d4"
	I1123 11:21:36.643103  751095 cri.go:89] found id: "b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef"
	I1123 11:21:36.643124  751095 cri.go:89] found id: "19086a27c9d0305f6aaed6b856a8c3465b3c5186f5220a276e23f82da308c4f6"
	I1123 11:21:36.643146  751095 cri.go:89] found id: "2fcda04eae0c435a3ecda39fde16360c7527d896df39314f18046cd3abfb3b0c"
	I1123 11:21:36.643165  751095 cri.go:89] found id: "cd47bb53c6c9409136a0de45f335cfa1b4ae0d245cb0ee6b78f4018bf100d946"
	I1123 11:21:36.643198  751095 cri.go:89] found id: "e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f"
	I1123 11:21:36.643224  751095 cri.go:89] found id: "627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f"
	I1123 11:21:36.643255  751095 cri.go:89] found id: "21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4"
	I1123 11:21:36.643270  751095 cri.go:89] found id: "005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e"
	I1123 11:21:36.643291  751095 cri.go:89] found id: "80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	I1123 11:21:36.643311  751095 cri.go:89] found id: "511509d807681fad8dd77857c090e47e76497556036046e2c6c20640528a4c94"
	I1123 11:21:36.643343  751095 cri.go:89] found id: ""
	I1123 11:21:36.643435  751095 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 11:21:36.662517  751095 out.go:203] 
	W1123 11:21:36.665864  751095 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:21:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:21:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 11:21:36.665893  751095 out.go:285] * 
	* 
	W1123 11:21:36.676378  751095 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 11:21:36.679705  751095 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-103096 --alsologtostderr -v=1 failed: exit status 80
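The stderr trace shows the shape of the pause path: load the profile, check the node container with docker, SSH in, stop the kubelet (systemctl disable --now kubelet), list kube-system / kubernetes-dashboard / istio-operator containers with crictl, and finally ask the low-level runtime for its view with sudo runc list -f json. That last call is what fails, runc cannot open /run/runc on the node, and after two short retries (about 370ms and 206ms per retry.go) minikube exits with GUEST_PAUSE. Below is a minimal retry-with-backoff sketch in the same spirit; it is an illustration, not minikube's retry package, and the command and rough timings are simply taken from the log.

// retry_sketch.go: re-run a flaky command a few times with a short backoff,
// mirroring the "will retry after ..." lines in the trace above. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithRetry runs the command up to attempts times, sleeping backoff between tries.
func runWithRetry(attempts int, backoff time.Duration, name string, args ...string) ([]byte, error) {
	var out []byte
	var err error
	for i := 0; i < attempts; i++ {
		out, err = exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return out, nil
		}
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, backoff)
		time.Sleep(backoff)
	}
	return out, fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// The call that fails in this run: runc cannot open its state directory.
	out, err := runWithRetry(3, 300*time.Millisecond, "sudo", "runc", "list", "-f", "json")
	if err != nil {
		fmt.Printf("%v\n%s", err, out) // e.g. "open /run/runc: no such file or directory"
		return
	}
	fmt.Println(string(out))
}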
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-103096
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-103096:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0",
	        "Created": "2025-11-23T11:18:31.407055739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 747246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:20:16.990896679Z",
	            "FinishedAt": "2025-11-23T11:20:13.379322144Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/hosts",
	        "LogPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0-json.log",
	        "Name": "/default-k8s-diff-port-103096",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-103096:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-103096",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0",
	                "LowerDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103096",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103096/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103096",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103096",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103096",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a73ab855e40ff26e4a27df91e2c4f1d2a8cd2644b47f63c1633e1e08a3f9aea",
	            "SandboxKey": "/var/run/docker/netns/6a73ab855e40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-103096": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:31:1d:bd:f6:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e03847072cf28dc18f7a1d9d48fec693250a4b2bc18a1175017d251775e454c9",
	                    "EndpointID": "4808ebf5eff775a14c532e917ba07536444246523161a154890caaab03070511",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103096",
	                        "ea90e0e4e065"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
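The inspect dump above is captured in full, but only a few fields bear on this failure: the container is Running and not Paused, /run and /tmp are tmpfs mounts inside the kic node (tmpfs contents do not survive a restart of the container), and SSH is published on 127.0.0.1:33842. A small Go sketch that decodes just those fields from docker inspect output, assuming the docker CLI is available and the profile container exists; the struct here is ad hoc and only mirrors the field names visible in the dump:

// inspect_fields.go: pull the few fields from `docker inspect` that matter for
// this post-mortem (state, tmpfs mounts, published SSH port). Illustrative only;
// the container name comes from the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	State struct {
		Status string `json:"Status"`
		Paused bool   `json:"Paused"`
	} `json:"State"`
	HostConfig struct {
		Tmpfs map[string]string `json:"Tmpfs"`
	} `json:"HostConfig"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	name := "default-k8s-diff-port-103096"
	raw, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		panic(err)
	}
	var got []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(raw, &got); err != nil {
		panic(err)
	}
	c := got[0]
	fmt.Println("status:", c.State.Status, "paused:", c.State.Paused)
	fmt.Println("tmpfs:", c.HostConfig.Tmpfs) // /run and /tmp are tmpfs in the kic image
	fmt.Println("ssh port:", c.NetworkSettings.Ports["22/tcp"])
}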
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096: exit status 2 (472.482419ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 25: (1.971040953s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p newest-cni-058071 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-058071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103096 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ image   │ newest-cni-058071 image list --format=json                                                                                                                                                                                                    │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ pause   │ -p newest-cni-058071 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │                     │
	│ delete  │ -p newest-cni-058071                                                                                                                                                                                                                          │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ delete  │ -p newest-cni-058071                                                                                                                                                                                                                          │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ start   │ -p auto-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-344709                  │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:21 UTC │
	│ ssh     │ -p auto-344709 pgrep -a kubelet                                                                                                                                                                                                               │ auto-344709                  │ jenkins │ v1.37.0 │ 23 Nov 25 11:21 UTC │ 23 Nov 25 11:21 UTC │
	│ image   │ default-k8s-diff-port-103096 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:21 UTC │ 23 Nov 25 11:21 UTC │
	│ pause   │ -p default-k8s-diff-port-103096 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:20:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:20:16.497843  746758 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:20:16.498494  746758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:20:16.498530  746758 out.go:374] Setting ErrFile to fd 2...
	I1123 11:20:16.498550  746758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:20:16.498851  746758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:20:16.499343  746758 out.go:368] Setting JSON to false
	I1123 11:20:16.500284  746758 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14565,"bootTime":1763882251,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:20:16.500477  746758 start.go:143] virtualization:  
	I1123 11:20:16.504370  746758 out.go:179] * [default-k8s-diff-port-103096] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:20:16.507397  746758 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:20:16.507458  746758 notify.go:221] Checking for updates...
	I1123 11:20:16.512959  746758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:20:16.515819  746758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:16.518627  746758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:20:16.521372  746758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:20:16.524228  746758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:20:16.527568  746758 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:16.528254  746758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:20:16.571153  746758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:20:16.571271  746758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:20:16.687672  746758 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 11:20:16.675188502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:20:16.687766  746758 docker.go:319] overlay module found
	I1123 11:20:16.691144  746758 out.go:179] * Using the docker driver based on existing profile
	I1123 11:20:16.694162  746758 start.go:309] selected driver: docker
	I1123 11:20:16.694192  746758 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:16.694303  746758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:20:16.694957  746758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:20:16.847276  746758 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 11:20:16.833808294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:20:16.848255  746758 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:20:16.848433  746758 cni.go:84] Creating CNI manager for ""
	I1123 11:20:16.848508  746758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:16.848804  746758 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:16.852320  746758 out.go:179] * Starting "default-k8s-diff-port-103096" primary control-plane node in "default-k8s-diff-port-103096" cluster
	I1123 11:20:16.855398  746758 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:20:16.858482  746758 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:20:16.861359  746758 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:20:16.861447  746758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:20:16.861467  746758 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:20:16.861492  746758 cache.go:65] Caching tarball of preloaded images
	I1123 11:20:16.861580  746758 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:20:16.861589  746758 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:20:16.861699  746758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:20:16.894438  746758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:20:16.894458  746758 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:20:16.894474  746758 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:20:16.894504  746758 start.go:360] acquireMachinesLock for default-k8s-diff-port-103096: {Name:mk974e47f06d6cbaa10109a8c2801bcc82e17d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:20:16.894559  746758 start.go:364] duration metric: took 33.116µs to acquireMachinesLock for "default-k8s-diff-port-103096"
	I1123 11:20:16.894577  746758 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:20:16.894583  746758 fix.go:54] fixHost starting: 
	I1123 11:20:16.894855  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:16.934672  746758 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103096: state=Stopped err=<nil>
	W1123 11:20:16.934705  746758 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 11:20:16.119190  746221 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-344709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.110565778s)
	I1123 11:20:16.119225  746221 kic.go:203] duration metric: took 4.110729039s to extract preloaded images to volume ...
	W1123 11:20:16.119369  746221 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:20:16.119480  746221 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:20:16.188567  746221 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-344709 --name auto-344709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-344709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-344709 --network auto-344709 --ip 192.168.76.2 --volume auto-344709:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:20:16.570837  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Running}}
	I1123 11:20:16.636054  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:16.699510  746221 cli_runner.go:164] Run: docker exec auto-344709 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:20:16.783691  746221 oci.go:144] the created container "auto-344709" has a running status.
	I1123 11:20:16.783721  746221 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa...
	I1123 11:20:16.925330  746221 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:20:16.954270  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:17.013585  746221 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:20:17.013610  746221 kic_runner.go:114] Args: [docker exec --privileged auto-344709 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:20:17.112413  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:17.142386  746221 machine.go:94] provisionDockerMachine start ...
	I1123 11:20:17.142487  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:17.171951  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:17.172363  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:17.172375  746221 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:20:17.174585  746221 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:20:20.329194  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-344709
	
	I1123 11:20:20.329219  746221 ubuntu.go:182] provisioning hostname "auto-344709"
	I1123 11:20:20.329282  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:20.346948  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.347268  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:20.347284  746221 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-344709 && echo "auto-344709" | sudo tee /etc/hostname
	I1123 11:20:20.516093  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-344709
	
	I1123 11:20:20.516176  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:20.536028  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.536358  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:20.536380  746221 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-344709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-344709/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-344709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:20:20.693794  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
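
The shell snippet run just above (and repeated for the default-k8s-diff-port profile further down) keeps /etc/hosts idempotent: a 127.0.1.1 entry for the node hostname is only rewritten or appended when no line already ends in that hostname. A minimal standalone Go sketch of the same check-then-edit logic, operating on a local copy of the file (the path, function name, and hostname are illustrative, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the idempotent shell logic from the log:
    // if no line already maps to the hostname, either rewrite an existing
    // 127.0.1.1 entry or append a new one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	content := string(data)
    	// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
    	hasHost := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content)
    	if hasHost {
    		return nil // already present, nothing to do
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(content) {
    		// The sed branch: rewrite the existing 127.0.1.1 line.
    		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
    	} else {
    		// The tee -a branch: append a new line.
    		content = strings.TrimRight(content, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
    	}
    	return os.WriteFile(path, []byte(content), 0644)
    }

    func main() {
    	// Hypothetical local copy of /etc/hosts, used only for illustration.
    	if err := ensureHostsEntry("hosts.copy", "auto-344709"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
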
	I1123 11:20:20.693823  746221 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:20:20.693844  746221 ubuntu.go:190] setting up certificates
	I1123 11:20:20.693854  746221 provision.go:84] configureAuth start
	I1123 11:20:20.693912  746221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-344709
	I1123 11:20:20.713164  746221 provision.go:143] copyHostCerts
	I1123 11:20:20.713243  746221 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:20:20.713252  746221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:20:20.713338  746221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:20:20.713464  746221 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:20:20.713471  746221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:20:20.713508  746221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:20:20.713568  746221 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:20:20.713579  746221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:20:20.713605  746221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:20:20.713662  746221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.auto-344709 san=[127.0.0.1 192.168.76.2 auto-344709 localhost minikube]
	I1123 11:20:20.860627  746221 provision.go:177] copyRemoteCerts
	I1123 11:20:20.860736  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:20:20.860831  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:20.878234  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:20.995037  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:20:21.020610  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1123 11:20:21.042891  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:20:21.063250  746221 provision.go:87] duration metric: took 369.376307ms to configureAuth
	I1123 11:20:21.063286  746221 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:20:21.063470  746221 config.go:182] Loaded profile config "auto-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:21.063567  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.082396  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:21.082707  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:21.082721  746221 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:20:16.938641  746758 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103096" ...
	I1123 11:20:16.938728  746758 cli_runner.go:164] Run: docker start default-k8s-diff-port-103096
	I1123 11:20:17.348125  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:17.371951  746758 kic.go:430] container "default-k8s-diff-port-103096" state is running.
	I1123 11:20:17.372389  746758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:20:17.403082  746758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:20:17.403327  746758 machine.go:94] provisionDockerMachine start ...
	I1123 11:20:17.403389  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:17.441219  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:17.441693  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:17.441706  746758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:20:17.442388  746758 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43746->127.0.0.1:33842: read: connection reset by peer
	I1123 11:20:20.597940  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:20:20.597973  746758 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103096"
	I1123 11:20:20.598073  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:20.625784  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.626185  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:20.626202  746758 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103096 && echo "default-k8s-diff-port-103096" | sudo tee /etc/hostname
	I1123 11:20:20.800551  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:20:20.800628  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:20.822444  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.822748  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:20.822771  746758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:20:20.990314  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:20:20.990345  746758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:20:20.990381  746758 ubuntu.go:190] setting up certificates
	I1123 11:20:20.990392  746758 provision.go:84] configureAuth start
	I1123 11:20:20.990460  746758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:20:21.015033  746758 provision.go:143] copyHostCerts
	I1123 11:20:21.015107  746758 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:20:21.015124  746758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:20:21.015184  746758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:20:21.015306  746758 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:20:21.015318  746758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:20:21.015341  746758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:20:21.015413  746758 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:20:21.015424  746758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:20:21.015450  746758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:20:21.015550  746758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103096 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103096 localhost minikube]
	I1123 11:20:21.382219  746758 provision.go:177] copyRemoteCerts
	I1123 11:20:21.382303  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:20:21.382363  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:21.408792  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:21.404458  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:20:21.404484  746221 machine.go:97] duration metric: took 4.262079044s to provisionDockerMachine
	I1123 11:20:21.404495  746221 client.go:176] duration metric: took 10.066657192s to LocalClient.Create
	I1123 11:20:21.404516  746221 start.go:167] duration metric: took 10.066720727s to libmachine.API.Create "auto-344709"
	I1123 11:20:21.404523  746221 start.go:293] postStartSetup for "auto-344709" (driver="docker")
	I1123 11:20:21.404533  746221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:20:21.404613  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:20:21.404656  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.427253  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.537633  746221 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:20:21.541885  746221 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:20:21.541960  746221 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:20:21.541985  746221 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:20:21.542079  746221 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:20:21.542212  746221 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:20:21.542370  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:20:21.552039  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:21.574945  746221 start.go:296] duration metric: took 170.408257ms for postStartSetup
	I1123 11:20:21.575426  746221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-344709
	I1123 11:20:21.596878  746221 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/config.json ...
	I1123 11:20:21.597164  746221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:20:21.597214  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.622921  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.734286  746221 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:20:21.738979  746221 start.go:128] duration metric: took 10.404719165s to createHost
	I1123 11:20:21.739006  746221 start.go:83] releasing machines lock for "auto-344709", held for 10.404847948s
	I1123 11:20:21.739076  746221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-344709
	I1123 11:20:21.755812  746221 ssh_runner.go:195] Run: cat /version.json
	I1123 11:20:21.755877  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.756158  746221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:20:21.756216  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.783463  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.788217  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.901570  746221 ssh_runner.go:195] Run: systemctl --version
	I1123 11:20:21.999125  746221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:20:22.049998  746221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:20:22.065738  746221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:20:22.065815  746221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:20:22.112851  746221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
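
The find/mv pipeline above renames bridge and podman configs under /etc/cni/net.d to *.mk_disabled so they cannot shadow the CNI that minikube installs (kindnet for the docker driver + crio runtime combination). A rough local equivalent in Go, assuming a plain rename is all that is needed (the directory would require root on a real node):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		// Same filter as the find invocation: bridge or podman configs
    		// that are not already disabled.
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue
    		}
    		src := filepath.Join(dir, name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			continue
    		}
    		fmt.Println("disabled", src)
    	}
    }
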
	I1123 11:20:22.112876  746221 start.go:496] detecting cgroup driver to use...
	I1123 11:20:22.112948  746221 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:20:22.113057  746221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:20:22.136375  746221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:20:22.152133  746221 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:20:22.152241  746221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:20:22.174160  746221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:20:22.202293  746221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:20:22.356730  746221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:20:22.581300  746221 docker.go:234] disabling docker service ...
	I1123 11:20:22.581380  746221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:20:22.612026  746221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:20:22.627571  746221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:20:22.757477  746221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:20:22.912600  746221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:20:22.936283  746221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:20:22.950570  746221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:20:22.950642  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.959489  746221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:20:22.959557  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.968313  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.977452  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.986298  746221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:20:22.994476  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.004182  746221 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.021500  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.031430  746221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:20:23.039777  746221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:20:23.055806  746221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:23.202990  746221 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 11:20:23.410027  746221 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:20:23.410140  746221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:20:23.417297  746221 start.go:564] Will wait 60s for crictl version
	I1123 11:20:23.417463  746221 ssh_runner.go:195] Run: which crictl
	I1123 11:20:23.422036  746221 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:20:23.457021  746221 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:20:23.457133  746221 ssh_runner.go:195] Run: crio --version
	I1123 11:20:23.495798  746221 ssh_runner.go:195] Run: crio --version
	I1123 11:20:23.537069  746221 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
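
Before printing "Preparing Kubernetes v1.34.1 on CRI-O 1.34.2", the run waits for /var/run/crio/crio.sock and then calls sudo /usr/local/bin/crictl version, reading RuntimeVersion from the block shown above. A standalone probe along the same lines, assuming crictl sits at the path given in the log (the parsing is illustrative):

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same binary the log invokes: sudo /usr/local/bin/crictl version
    	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "crictl not reachable:", err)
    		os.Exit(1)
    	}
    	// Output resembles the block in the log:
    	//   RuntimeName:  cri-o
    	//   RuntimeVersion:  1.34.2
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		if v, ok := strings.CutPrefix(sc.Text(), "RuntimeVersion:"); ok {
    			fmt.Println("container runtime version:", strings.TrimSpace(v))
    		}
    	}
    }
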
	I1123 11:20:21.527331  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:20:21.548879  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 11:20:21.570394  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:20:21.593505  746758 provision.go:87] duration metric: took 603.086809ms to configureAuth
	I1123 11:20:21.593528  746758 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:20:21.593724  746758 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:21.593824  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:21.621647  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:21.622008  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:21.622022  746758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:20:22.052447  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:20:22.052477  746758 machine.go:97] duration metric: took 4.649139659s to provisionDockerMachine
	I1123 11:20:22.052488  746758 start.go:293] postStartSetup for "default-k8s-diff-port-103096" (driver="docker")
	I1123 11:20:22.052499  746758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:20:22.052559  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:20:22.052632  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.077269  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.190177  746758 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:20:22.194346  746758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:20:22.194374  746758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:20:22.194385  746758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:20:22.194437  746758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:20:22.194517  746758 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:20:22.194613  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:20:22.204128  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:22.225928  746758 start.go:296] duration metric: took 173.424018ms for postStartSetup
	I1123 11:20:22.226061  746758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:20:22.226130  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.252332  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.370828  746758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:20:22.379168  746758 fix.go:56] duration metric: took 5.484577976s for fixHost
	I1123 11:20:22.379191  746758 start.go:83] releasing machines lock for "default-k8s-diff-port-103096", held for 5.484623588s
	I1123 11:20:22.379260  746758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:20:22.400310  746758 ssh_runner.go:195] Run: cat /version.json
	I1123 11:20:22.400377  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.400310  746758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:20:22.400518  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.422933  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.445054  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.566796  746758 ssh_runner.go:195] Run: systemctl --version
	I1123 11:20:22.678914  746758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:20:22.740792  746758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:20:22.746833  746758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:20:22.746916  746758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:20:22.760456  746758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:20:22.760490  746758 start.go:496] detecting cgroup driver to use...
	I1123 11:20:22.760522  746758 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:20:22.760584  746758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:20:22.779526  746758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:20:22.801632  746758 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:20:22.801753  746758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:20:22.823760  746758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:20:22.843217  746758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:20:23.004851  746758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:20:23.174557  746758 docker.go:234] disabling docker service ...
	I1123 11:20:23.174687  746758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:20:23.190659  746758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:20:23.206494  746758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:20:23.342795  746758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:20:23.477178  746758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:20:23.492501  746758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:20:23.510089  746758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:20:23.510188  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.525938  746758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:20:23.526087  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.538523  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.550553  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.565144  746758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:20:23.575206  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.588520  746758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.597915  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.610608  746758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:20:23.622228  746758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:20:23.630790  746758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:23.779949  746758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 11:20:23.972748  746758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:20:23.972816  746758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:20:23.977861  746758 start.go:564] Will wait 60s for crictl version
	I1123 11:20:23.977946  746758 ssh_runner.go:195] Run: which crictl
	I1123 11:20:23.982119  746758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:20:24.009725  746758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:20:24.009824  746758 ssh_runner.go:195] Run: crio --version
	I1123 11:20:24.057654  746758 ssh_runner.go:195] Run: crio --version
	I1123 11:20:24.129177  746758 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:20:23.540195  746221 cli_runner.go:164] Run: docker network inspect auto-344709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:20:23.561592  746221 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:20:23.565645  746221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:20:23.578350  746221 kubeadm.go:884] updating cluster {Name:auto-344709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-344709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:20:23.578483  746221 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:20:23.578534  746221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:23.622307  746221 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:23.622330  746221 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:20:23.622375  746221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:23.654470  746221 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:23.654494  746221 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:20:23.654502  746221 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:20:23.654642  746221 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-344709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-344709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
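
The kubelet unit logged above is rendered per node: ExecStart is cleared and re-set with --hostname-override and --node-ip for this machine alongside the shared flags, and the result is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 361-byte scp below). A minimal text/template sketch of that kind of rendering, trimmed to the node-specific flags (template text and field names are illustrative, not minikube's source):

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// Values taken from the auto-344709 node in this run.
    	_ = t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.34.1",
    		"NodeName":          "auto-344709",
    		"NodeIP":            "192.168.76.2",
    	})
    }
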
	I1123 11:20:23.654757  746221 ssh_runner.go:195] Run: crio config
	I1123 11:20:23.758290  746221 cni.go:84] Creating CNI manager for ""
	I1123 11:20:23.758314  746221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:23.758354  746221 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:20:23.758385  746221 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-344709 NodeName:auto-344709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:20:23.758562  746221 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-344709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
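
The generated kubeadm config above holds four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick, stdlib-only way to sanity-check such a file is to list the kind of each document; a sketch under the assumption that the "---" separators sit on their own lines (the local path is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Illustrative local path; on the node the file is
    	// /var/tmp/minikube/kubeadm.yaml.new.
    	data, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Split on document separators and print each document's kind.
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind:") {
    				fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    			}
    		}
    	}
    }
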
	
	I1123 11:20:23.758663  746221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:20:23.768357  746221 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:20:23.768475  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:20:23.777191  746221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1123 11:20:23.795693  746221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:20:23.810916  746221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1123 11:20:23.824125  746221 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:20:23.831042  746221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:20:23.842702  746221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:23.999167  746221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:24.020799  746221 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709 for IP: 192.168.76.2
	I1123 11:20:24.020820  746221 certs.go:195] generating shared ca certs ...
	I1123 11:20:24.020838  746221 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.021057  746221 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:20:24.021144  746221 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:20:24.021159  746221 certs.go:257] generating profile certs ...
	I1123 11:20:24.021239  746221 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.key
	I1123 11:20:24.021272  746221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt with IP's: []
	I1123 11:20:24.097233  746221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt ...
	I1123 11:20:24.097314  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: {Name:mk39ab0ede81a5b2b03a844fd50c733613ac9e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.097568  746221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.key ...
	I1123 11:20:24.097605  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.key: {Name:mk6ca533ab3ba1c63213a62d24d4f9358494d664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.097760  746221 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2
	I1123 11:20:24.097803  746221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 11:20:24.324625  746221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2 ...
	I1123 11:20:24.324662  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2: {Name:mkce87ccbd35e1c44be5c3f308eb874644b859a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.324920  746221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2 ...
	I1123 11:20:24.324939  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2: {Name:mk5d4e2d27389e157dd9d9eddcde3753ba1f3679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.325077  746221 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt
	I1123 11:20:24.325190  746221 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key
	I1123 11:20:24.325273  746221 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key
	I1123 11:20:24.325290  746221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt with IP's: []
	I1123 11:20:24.404859  746221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt ...
	I1123 11:20:24.404892  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt: {Name:mk3833ef5569939c10850347236256b52a1378b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.405098  746221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key ...
	I1123 11:20:24.405113  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key: {Name:mk83de226f433f059707d3cc287ca8e81b308213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
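
The apiserver certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2], so a single cert is valid for the in-cluster service VIP, the loopback tunnel, and the node IP. A self-contained sketch of issuing a certificate with the same SANs using only the standard library (self-signed here for brevity, whereas the real cert is signed by the shared minikubeCA; the ~26280h lifetime matches CertExpiration in the cluster config):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// Same IP SANs as the apiserver cert in the log.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    		DNSNames: []string{"localhost", "auto-344709", "minikube"},
    	}
    	// Self-signed for illustration; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
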
	I1123 11:20:24.405326  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:20:24.405372  746221 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:20:24.405389  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:20:24.405432  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:20:24.405461  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:20:24.405488  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:20:24.405533  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:24.406143  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:20:24.424624  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:20:24.444000  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:20:24.469906  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:20:24.498592  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1123 11:20:24.523723  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:20:24.544661  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:20:24.565342  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:20:24.592863  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:20:24.621872  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:20:24.651393  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:20:24.682541  746221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:20:24.699028  746221 ssh_runner.go:195] Run: openssl version
	I1123 11:20:24.706193  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:20:24.716286  746221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:20:24.722171  746221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:20:24.722239  746221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:20:24.793362  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:20:24.807184  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:20:24.816269  746221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:20:24.823435  746221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:20:24.823578  746221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:20:24.902372  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:20:24.924353  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:20:24.938619  746221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:24.943410  746221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:24.943475  746221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.001067  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
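Editor's note: the openssl/ln sequence above is the standard OpenSSL hashed-directory layout. `openssl x509 -hash` prints the certificate's subject-name hash, and a `<hash>.0` symlink under /etc/ssl/certs is what lets TLS clients on the node resolve the minikube CA. A minimal sketch of the same idiom (file names taken from the log above; the verified certificate path is illustrative only):
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"              # hashed lookup name used by OpenSSL
	openssl verify -CApath /etc/ssl/certs /path/to/some-cert.pem                      # illustrative: the CA is now found via the hash link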
	I1123 11:20:25.017576  746221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:20:25.026094  746221 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:20:25.026157  746221 kubeadm.go:401] StartCluster: {Name:auto-344709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-344709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:25.026243  746221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:20:25.026317  746221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:20:25.060397  746221 cri.go:89] found id: ""
	I1123 11:20:25.060480  746221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:20:25.077343  746221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:20:25.087395  746221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:20:25.087466  746221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:20:25.100450  746221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:20:25.100476  746221 kubeadm.go:158] found existing configuration files:
	
	I1123 11:20:25.100544  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 11:20:25.111991  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:20:25.112065  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:20:25.122108  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 11:20:25.133866  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:20:25.133936  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:20:25.144658  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 11:20:25.155993  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:20:25.156082  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:20:25.165989  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 11:20:25.179168  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:20:25.179236  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 11:20:25.188460  746221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:20:25.245795  746221 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:20:25.246165  746221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:20:25.288671  746221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:20:25.288749  746221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:20:25.288788  746221 kubeadm.go:319] OS: Linux
	I1123 11:20:25.288838  746221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:20:25.288891  746221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:20:25.288943  746221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:20:25.288994  746221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:20:25.289045  746221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:20:25.289103  746221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:20:25.289152  746221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:20:25.289204  746221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:20:25.289255  746221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:20:25.379620  746221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:20:25.379737  746221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:20:25.379834  746221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 11:20:25.394000  746221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:20:24.132108  746758 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:20:24.157330  746758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:20:24.161152  746758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
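Editor's note: the /etc/hosts edit above uses a rewrite-then-copy idiom because inside the kic container /etc/hosts is a bind mount managed by Docker, so it can be overwritten in place with cp but not replaced with mv. A minimal sketch of the same pattern (host name and IP taken from the log; the temp-file name is illustrative):
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts   # cp keeps the bind-mounted file in place; renaming a mount point would fail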
	I1123 11:20:24.171208  746758 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:20:24.171358  746758 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:20:24.171407  746758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:24.206800  746758 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:24.206819  746758 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:20:24.206876  746758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:24.239696  746758 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:24.239761  746758 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:20:24.239786  746758 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 11:20:24.239924  746758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:20:24.240040  746758 ssh_runner.go:195] Run: crio config
	I1123 11:20:24.331310  746758 cni.go:84] Creating CNI manager for ""
	I1123 11:20:24.331381  746758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:24.331413  746758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:20:24.331464  746758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103096 NodeName:default-k8s-diff-port-103096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:20:24.331632  746758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103096"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
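Editor's note: the generated kubeadm.yaml above carries four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in a single file, and kubeadm consumes them together. Assuming the file is already on the node at the path shown earlier, a dry run is one way to sanity-check such a config without touching the cluster (illustrative invocation, not part of the test):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # parses the config and prints what would be done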
	
	I1123 11:20:24.331721  746758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:20:24.339774  746758 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:20:24.339891  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:20:24.347840  746758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 11:20:24.363318  746758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:20:24.376008  746758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1123 11:20:24.388754  746758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:20:24.392844  746758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:20:24.402434  746758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:24.566714  746758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:24.587023  746758 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096 for IP: 192.168.85.2
	I1123 11:20:24.587103  746758 certs.go:195] generating shared ca certs ...
	I1123 11:20:24.587135  746758 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.587329  746758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:20:24.587416  746758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:20:24.587451  746758 certs.go:257] generating profile certs ...
	I1123 11:20:24.587594  746758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key
	I1123 11:20:24.587707  746758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d
	I1123 11:20:24.587780  746758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key
	I1123 11:20:24.587929  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:20:24.587984  746758 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:20:24.588007  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:20:24.588073  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:20:24.588130  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:20:24.588195  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:20:24.588275  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:24.588906  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:20:24.639355  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:20:24.682784  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:20:24.711697  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:20:24.759751  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 11:20:24.817101  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:20:24.855934  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:20:24.896935  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:20:24.943187  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:20:24.974831  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:20:24.996714  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:20:25.024422  746758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:20:25.044992  746758 ssh_runner.go:195] Run: openssl version
	I1123 11:20:25.054285  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:20:25.066575  746758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:20:25.072167  746758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:20:25.072288  746758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:20:25.126130  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:20:25.135861  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:20:25.146015  746758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.151037  746758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.151181  746758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.196830  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:20:25.205816  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:20:25.215430  746758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:20:25.220210  746758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:20:25.220335  746758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:20:25.263107  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:20:25.271691  746758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:20:25.276846  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:20:25.320199  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:20:25.419219  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:20:25.488368  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:20:25.556288  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:20:25.629286  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 11:20:25.743904  746758 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:25.744004  746758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:20:25.744100  746758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:20:25.841547  746758 cri.go:89] found id: "e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f"
	I1123 11:20:25.841571  746758 cri.go:89] found id: "627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f"
	I1123 11:20:25.841595  746758 cri.go:89] found id: "21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4"
	I1123 11:20:25.841599  746758 cri.go:89] found id: "005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e"
	I1123 11:20:25.841603  746758 cri.go:89] found id: ""
	I1123 11:20:25.841658  746758 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:20:25.863264  746758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:25Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:20:25.863360  746758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:20:25.882260  746758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:20:25.882296  746758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:20:25.882357  746758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:20:25.937657  746758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:20:25.938153  746758 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103096" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:25.938277  746758 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103096" cluster setting kubeconfig missing "default-k8s-diff-port-103096" context setting]
	I1123 11:20:25.938618  746758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:25.946357  746758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:20:25.972619  746758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 11:20:25.972703  746758 kubeadm.go:602] duration metric: took 90.399699ms to restartPrimaryControlPlane
	I1123 11:20:25.972727  746758 kubeadm.go:403] duration metric: took 228.83321ms to StartCluster
	I1123 11:20:25.972768  746758 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:25.972869  746758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:25.973609  746758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:25.973895  746758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:20:25.974150  746758 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:25.974202  746758 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:20:25.974288  746758 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103096"
	I1123 11:20:25.974322  746758 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103096"
	W1123 11:20:25.974345  746758 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:20:25.974368  746758 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:20:25.974618  746758 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103096"
	I1123 11:20:25.974679  746758 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103096"
	W1123 11:20:25.974700  746758 addons.go:248] addon dashboard should already be in state true
	I1123 11:20:25.974761  746758 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:20:25.974917  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:25.975610  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:25.975865  746758 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103096"
	I1123 11:20:25.975887  746758 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103096"
	I1123 11:20:25.976163  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:25.980183  746758 out.go:179] * Verifying Kubernetes components...
	I1123 11:20:25.983406  746758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:26.026361  746758 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:20:26.029336  746758 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:26.029360  746758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:20:26.029548  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:26.033981  746758 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103096"
	W1123 11:20:26.034000  746758 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:20:26.034038  746758 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:20:26.034462  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:26.039173  746758 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:20:26.042177  746758 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:20:25.400644  746221 out.go:252]   - Generating certificates and keys ...
	I1123 11:20:25.400743  746221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:20:25.400818  746221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:20:25.585202  746221 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:20:26.033432  746221 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:20:26.045103  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:20:26.045129  746758 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:20:26.045200  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:26.081744  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:26.087741  746758 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:26.087762  746758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:20:26.087825  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:26.114885  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:26.129024  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:26.369909  746758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:26.396069  746758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:26.397478  746758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:26.276788  746221 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:20:26.706092  746221 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:20:27.946221  746221 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:20:27.946766  746221 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-344709 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:20:28.369885  746221 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:20:28.370430  746221 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-344709 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:20:28.789767  746221 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:20:29.688319  746221 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:20:29.899422  746221 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:20:29.899957  746221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:20:30.867968  746221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:20:30.996395  746221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:20:26.746244  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:20:26.746270  746758 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:20:26.775134  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:20:26.775210  746758 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:20:26.840104  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:20:26.840124  746758 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:20:26.873874  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:20:26.873893  746758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:20:26.903134  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:20:26.903155  746758 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:20:26.967991  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:20:26.968012  746758 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:20:27.022430  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:20:27.022494  746758 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:20:27.069718  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:20:27.069793  746758 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:20:27.135267  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:20:27.135349  746758 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:20:27.189470  746758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:20:31.570759  746221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:20:32.110914  746221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:20:32.474132  746221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:20:32.475209  746221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:20:32.481008  746221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:20:32.484427  746221 out.go:252]   - Booting up control plane ...
	I1123 11:20:32.484546  746221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:20:32.484639  746221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:20:32.484717  746221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:20:32.504215  746221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:20:32.504333  746221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:20:32.517902  746221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:20:32.519425  746221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:20:32.519481  746221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:20:32.766778  746221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:20:32.766908  746221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:20:33.765769  746221 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00141566s
	I1123 11:20:33.765884  746221 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:20:33.765973  746221 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 11:20:33.766067  746221 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:20:33.766148  746221 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 11:20:39.973923  746758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.603932794s)
	I1123 11:20:40.368625  746758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.971071875s)
	I1123 11:20:40.368965  746758 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (13.972832604s)
	I1123 11:20:40.368988  746758 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:20:40.419495  746758 node_ready.go:49] node "default-k8s-diff-port-103096" is "Ready"
	I1123 11:20:40.419523  746758 node_ready.go:38] duration metric: took 50.523818ms for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:20:40.419539  746758 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:20:40.419598  746758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:20:40.748087  746758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.558504893s)
	I1123 11:20:40.748253  746758 api_server.go:72] duration metric: took 14.774300177s to wait for apiserver process to appear ...
	I1123 11:20:40.748270  746758 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:20:40.748289  746758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:20:40.751008  746758 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-103096 addons enable metrics-server
	
	I1123 11:20:40.753841  746758 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 11:20:39.855869  746221 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.088878956s
	I1123 11:20:40.756559  746758 addons.go:530] duration metric: took 14.782352264s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 11:20:40.763922  746758 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:20:40.763962  746758 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:41.248484  746758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:20:41.256815  746758 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:20:41.257901  746758 api_server.go:141] control plane version: v1.34.1
	I1123 11:20:41.257960  746758 api_server.go:131] duration metric: took 509.679384ms to wait for apiserver health ...
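Editor's note: the 500-then-200 sequence above is the apiserver's verbose healthz output; the single failing check (poststarthook/rbac/bootstrap-roles) clears once the bootstrap RBAC roles have been reconciled shortly after startup. The same per-check view can be pulled manually; the commands below are shown only as an illustration (cluster address and profile name taken from the log, client credentials are placeholders):
	kubectl --context default-k8s-diff-port-103096 get --raw='/healthz?verbose'
	curl -k --cert client.crt --key client.key 'https://192.168.85.2:8444/healthz?verbose'   # client.crt/client.key stand in for admin client credentials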
	I1123 11:20:41.257984  746758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:20:41.262901  746758 system_pods.go:59] 8 kube-system pods found
	I1123 11:20:41.262990  746758 system_pods.go:61] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:20:41.263016  746758 system_pods.go:61] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:20:41.263053  746758 system_pods.go:61] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:20:41.263080  746758 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:20:41.263102  746758 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:20:41.263139  746758 system_pods.go:61] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:20:41.263166  746758 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:20:41.263185  746758 system_pods.go:61] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:20:41.263226  746758 system_pods.go:74] duration metric: took 5.20879ms to wait for pod list to return data ...
	I1123 11:20:41.263254  746758 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:20:41.266257  746758 default_sa.go:45] found service account: "default"
	I1123 11:20:41.266318  746758 default_sa.go:55] duration metric: took 3.042185ms for default service account to be created ...
	I1123 11:20:41.266371  746758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:20:41.275250  746758 system_pods.go:86] 8 kube-system pods found
	I1123 11:20:41.275281  746758 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:20:41.275292  746758 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:20:41.275300  746758 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:20:41.275308  746758 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:20:41.275312  746758 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:20:41.275317  746758 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:20:41.275323  746758 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:20:41.275327  746758 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:20:41.275334  746758 system_pods.go:126] duration metric: took 8.938351ms to wait for k8s-apps to be running ...
	I1123 11:20:41.275341  746758 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:20:41.275396  746758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:20:41.297300  746758 system_svc.go:56] duration metric: took 21.949227ms WaitForService to wait for kubelet
	I1123 11:20:41.297328  746758 kubeadm.go:587] duration metric: took 15.323374242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:20:41.297346  746758 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:20:41.300526  746758 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:20:41.300558  746758 node_conditions.go:123] node cpu capacity is 2
	I1123 11:20:41.300570  746758 node_conditions.go:105] duration metric: took 3.219658ms to run NodePressure ...
	I1123 11:20:41.300583  746758 start.go:242] waiting for startup goroutines ...
	I1123 11:20:41.300590  746758 start.go:247] waiting for cluster config update ...
	I1123 11:20:41.300601  746758 start.go:256] writing updated cluster config ...
	I1123 11:20:41.300881  746758 ssh_runner.go:195] Run: rm -f paused
	I1123 11:20:41.304871  746758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:20:41.308706  746758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:20:41.571665  746221 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.805952544s
	I1123 11:20:43.769124  746221 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002468051s
	I1123 11:20:43.791441  746221 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 11:20:43.814184  746221 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 11:20:43.831244  746221 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 11:20:43.831456  746221 kubeadm.go:319] [mark-control-plane] Marking the node auto-344709 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 11:20:43.844488  746221 kubeadm.go:319] [bootstrap-token] Using token: t0aoo6.ojfbev4u7cauvp1h
	I1123 11:20:43.847497  746221 out.go:252]   - Configuring RBAC rules ...
	I1123 11:20:43.847685  746221 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 11:20:43.853328  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 11:20:43.861572  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 11:20:43.866062  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 11:20:43.873351  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 11:20:43.878428  746221 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 11:20:44.176754  746221 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 11:20:44.843409  746221 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 11:20:45.181358  746221 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 11:20:45.183110  746221 kubeadm.go:319] 
	I1123 11:20:45.183187  746221 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 11:20:45.183194  746221 kubeadm.go:319] 
	I1123 11:20:45.183273  746221 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 11:20:45.183278  746221 kubeadm.go:319] 
	I1123 11:20:45.183303  746221 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 11:20:45.183923  746221 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 11:20:45.183984  746221 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 11:20:45.183989  746221 kubeadm.go:319] 
	I1123 11:20:45.184044  746221 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 11:20:45.184048  746221 kubeadm.go:319] 
	I1123 11:20:45.184096  746221 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 11:20:45.184100  746221 kubeadm.go:319] 
	I1123 11:20:45.184152  746221 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 11:20:45.184228  746221 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 11:20:45.184297  746221 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 11:20:45.184301  746221 kubeadm.go:319] 
	I1123 11:20:45.187287  746221 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 11:20:45.187430  746221 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 11:20:45.187469  746221 kubeadm.go:319] 
	I1123 11:20:45.188577  746221 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t0aoo6.ojfbev4u7cauvp1h \
	I1123 11:20:45.188706  746221 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 11:20:45.189291  746221 kubeadm.go:319] 	--control-plane 
	I1123 11:20:45.189313  746221 kubeadm.go:319] 
	I1123 11:20:45.189763  746221 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 11:20:45.189776  746221 kubeadm.go:319] 
	I1123 11:20:45.190135  746221 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t0aoo6.ojfbev4u7cauvp1h \
	I1123 11:20:45.190473  746221 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 11:20:45.212056  746221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 11:20:45.212293  746221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 11:20:45.212451  746221 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 11:20:45.212471  746221 cni.go:84] Creating CNI manager for ""
	I1123 11:20:45.212479  746221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:45.217759  746221 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 11:20:45.221135  746221 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 11:20:45.235149  746221 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 11:20:45.235172  746221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 11:20:45.264867  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 11:20:45.738332  746221 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 11:20:45.738455  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:45.738522  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-344709 minikube.k8s.io/updated_at=2025_11_23T11_20_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=auto-344709 minikube.k8s.io/primary=true
	W1123 11:20:43.314594  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:45.316726  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	I1123 11:20:46.275231  746221 ops.go:34] apiserver oom_adj: -16
	I1123 11:20:46.275344  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:46.776376  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:47.275458  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:47.775745  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:48.276251  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:48.775451  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:48.973258  746221 kubeadm.go:1114] duration metric: took 3.234847916s to wait for elevateKubeSystemPrivileges
	I1123 11:20:48.973289  746221 kubeadm.go:403] duration metric: took 23.947136784s to StartCluster
	I1123 11:20:48.973310  746221 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:48.973373  746221 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:48.974349  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:48.974567  746221 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:20:48.974684  746221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:20:48.974927  746221 config.go:182] Loaded profile config "auto-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:48.974937  746221 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:20:48.975011  746221 addons.go:70] Setting storage-provisioner=true in profile "auto-344709"
	I1123 11:20:48.975025  746221 addons.go:239] Setting addon storage-provisioner=true in "auto-344709"
	I1123 11:20:48.975048  746221 host.go:66] Checking if "auto-344709" exists ...
	I1123 11:20:48.975066  746221 addons.go:70] Setting default-storageclass=true in profile "auto-344709"
	I1123 11:20:48.975082  746221 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-344709"
	I1123 11:20:48.975371  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:48.975535  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:48.978889  746221 out.go:179] * Verifying Kubernetes components...
	I1123 11:20:48.982262  746221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:49.018263  746221 addons.go:239] Setting addon default-storageclass=true in "auto-344709"
	I1123 11:20:49.018314  746221 host.go:66] Checking if "auto-344709" exists ...
	I1123 11:20:49.018751  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:49.020931  746221 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:20:49.024066  746221 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:49.024088  746221 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:20:49.024152  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:49.042022  746221 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:49.042043  746221 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:20:49.042102  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:49.073511  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:49.075840  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:49.396888  746221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:49.495109  746221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:49.495306  746221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:20:49.740816  746221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:51.104003  746221 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60857874s)
	I1123 11:20:51.104080  746221 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 11:20:51.105317  746221 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.609982562s)
	I1123 11:20:51.106280  746221 node_ready.go:35] waiting up to 15m0s for node "auto-344709" to be "Ready" ...
	I1123 11:20:51.106644  746221 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.36579509s)
	I1123 11:20:51.107814  746221 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.710851161s)
	I1123 11:20:51.190050  746221 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 11:20:47.813525  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:49.815232  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	I1123 11:20:51.193071  746221 addons.go:530] duration metric: took 2.218126774s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 11:20:51.610253  746221 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-344709" context rescaled to 1 replicas
	W1123 11:20:53.109331  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:20:55.110607  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:20:51.819771  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:54.315067  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:57.609088  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:00.115989  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:20:56.814887  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:59.314065  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:01.315719  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:02.609202  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:05.109346  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:03.814558  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:05.814910  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:07.109505  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:09.109557  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:07.816436  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:10.315748  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:11.110579  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:13.609632  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:15.609822  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:12.814800  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:15.314158  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:17.609860  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:20.110972  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:17.813930  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:19.814507  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	I1123 11:21:20.314392  746758 pod_ready.go:94] pod "coredns-66bc5c9577-jxjjg" is "Ready"
	I1123 11:21:20.314421  746758 pod_ready.go:86] duration metric: took 39.005645876s for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.317212  746758 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.325232  746758 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:20.325266  746758 pod_ready.go:86] duration metric: took 8.02694ms for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.329646  746758 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.334679  746758 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:20.334705  746758 pod_ready.go:86] duration metric: took 5.030378ms for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.337075  746758 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.512455  746758 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:20.512534  746758 pod_ready.go:86] duration metric: took 175.434107ms for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.712546  746758 pod_ready.go:83] waiting for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.111877  746758 pod_ready.go:94] pod "kube-proxy-kp7fv" is "Ready"
	I1123 11:21:21.111904  746758 pod_ready.go:86] duration metric: took 399.291899ms for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.312461  746758 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.711998  746758 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:21.712027  746758 pod_ready.go:86] duration metric: took 399.489978ms for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.712040  746758 pod_ready.go:40] duration metric: took 40.407085659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:21:21.768928  746758 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:21:21.772121  746758 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103096" cluster and "default" namespace by default
	W1123 11:21:22.111564  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:24.609255  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:26.609355  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:29.109714  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	I1123 11:21:31.110290  746221 node_ready.go:49] node "auto-344709" is "Ready"
	I1123 11:21:31.110320  746221 node_ready.go:38] duration metric: took 40.00398838s for node "auto-344709" to be "Ready" ...
	I1123 11:21:31.110336  746221 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:21:31.110395  746221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:21:31.123053  746221 api_server.go:72] duration metric: took 42.148450236s to wait for apiserver process to appear ...
	I1123 11:21:31.123081  746221 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:21:31.123100  746221 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:21:31.131323  746221 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:21:31.132561  746221 api_server.go:141] control plane version: v1.34.1
	I1123 11:21:31.132594  746221 api_server.go:131] duration metric: took 9.506036ms to wait for apiserver health ...
	I1123 11:21:31.132604  746221 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:21:31.136165  746221 system_pods.go:59] 8 kube-system pods found
	I1123 11:21:31.136209  746221 system_pods.go:61] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.136221  746221 system_pods.go:61] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.136226  746221 system_pods.go:61] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.136230  746221 system_pods.go:61] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.136234  746221 system_pods.go:61] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.136238  746221 system_pods.go:61] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.136241  746221 system_pods.go:61] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.136250  746221 system_pods.go:61] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.136262  746221 system_pods.go:74] duration metric: took 3.65109ms to wait for pod list to return data ...
	I1123 11:21:31.136276  746221 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:21:31.139762  746221 default_sa.go:45] found service account: "default"
	I1123 11:21:31.139789  746221 default_sa.go:55] duration metric: took 3.506127ms for default service account to be created ...
	I1123 11:21:31.139799  746221 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:21:31.143061  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:31.143100  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.143107  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.143137  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.143143  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.143153  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.143158  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.143171  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.143177  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.143213  746221 retry.go:31] will retry after 306.64943ms: missing components: kube-dns
	I1123 11:21:31.455175  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:31.455212  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.455219  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.455225  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.455230  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.455239  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.455245  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.455249  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.455255  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.455274  746221 retry.go:31] will retry after 359.158516ms: missing components: kube-dns
	I1123 11:21:31.818086  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:31.818119  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.818126  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.818132  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.818137  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.818141  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.818146  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.818150  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.818155  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.818171  746221 retry.go:31] will retry after 418.188948ms: missing components: kube-dns
	I1123 11:21:32.240808  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:32.240839  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Running
	I1123 11:21:32.240846  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:32.240850  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:32.240855  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:32.240859  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:32.240864  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:32.240868  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:32.240872  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Running
	I1123 11:21:32.240880  746221 system_pods.go:126] duration metric: took 1.101074689s to wait for k8s-apps to be running ...
	I1123 11:21:32.240892  746221 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:21:32.240949  746221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:21:32.253893  746221 system_svc.go:56] duration metric: took 12.991321ms WaitForService to wait for kubelet
	I1123 11:21:32.253926  746221 kubeadm.go:587] duration metric: took 43.279329303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:21:32.253945  746221 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:21:32.256936  746221 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:21:32.256969  746221 node_conditions.go:123] node cpu capacity is 2
	I1123 11:21:32.256983  746221 node_conditions.go:105] duration metric: took 3.033182ms to run NodePressure ...
	I1123 11:21:32.256996  746221 start.go:242] waiting for startup goroutines ...
	I1123 11:21:32.257003  746221 start.go:247] waiting for cluster config update ...
	I1123 11:21:32.257014  746221 start.go:256] writing updated cluster config ...
	I1123 11:21:32.257336  746221 ssh_runner.go:195] Run: rm -f paused
	I1123 11:21:32.261332  746221 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:21:32.266525  746221 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jc8v8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.271178  746221 pod_ready.go:94] pod "coredns-66bc5c9577-jc8v8" is "Ready"
	I1123 11:21:32.271205  746221 pod_ready.go:86] duration metric: took 4.627546ms for pod "coredns-66bc5c9577-jc8v8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.273672  746221 pod_ready.go:83] waiting for pod "etcd-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.278075  746221 pod_ready.go:94] pod "etcd-auto-344709" is "Ready"
	I1123 11:21:32.278103  746221 pod_ready.go:86] duration metric: took 4.40707ms for pod "etcd-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.280371  746221 pod_ready.go:83] waiting for pod "kube-apiserver-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.284625  746221 pod_ready.go:94] pod "kube-apiserver-auto-344709" is "Ready"
	I1123 11:21:32.284649  746221 pod_ready.go:86] duration metric: took 4.254074ms for pod "kube-apiserver-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.286730  746221 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.666339  746221 pod_ready.go:94] pod "kube-controller-manager-auto-344709" is "Ready"
	I1123 11:21:32.666366  746221 pod_ready.go:86] duration metric: took 379.614742ms for pod "kube-controller-manager-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.865652  746221 pod_ready.go:83] waiting for pod "kube-proxy-6whfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.269706  746221 pod_ready.go:94] pod "kube-proxy-6whfb" is "Ready"
	I1123 11:21:33.269736  746221 pod_ready.go:86] duration metric: took 404.057982ms for pod "kube-proxy-6whfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.469493  746221 pod_ready.go:83] waiting for pod "kube-scheduler-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.865885  746221 pod_ready.go:94] pod "kube-scheduler-auto-344709" is "Ready"
	I1123 11:21:33.865981  746221 pod_ready.go:86] duration metric: took 396.461122ms for pod "kube-scheduler-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.866002  746221 pod_ready.go:40] duration metric: took 1.604639624s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:21:33.919338  746221 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:21:33.924569  746221 out.go:179] * Done! kubectl is now configured to use "auto-344709" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.928895262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.938063209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.938599688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.979409067Z" level=info msg="Created container 80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c/dashboard-metrics-scraper" id=56964595-e299-4a1d-abf6-e473f7601b5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.981498516Z" level=info msg="Starting container: 80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2" id=72eb0936-98ba-4fb2-a6f3-bf95017ae88c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.983584297Z" level=info msg="Started container" PID=1651 containerID=80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c/dashboard-metrics-scraper id=72eb0936-98ba-4fb2-a6f3-bf95017ae88c name=/runtime.v1.RuntimeService/StartContainer sandboxID=76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903
	Nov 23 11:21:14 default-k8s-diff-port-103096 conmon[1649]: conmon 80a118a0fc6115cc5a69 <ninfo>: container 1651 exited with status 1
	Nov 23 11:21:15 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:15.227512529Z" level=info msg="Removing container: 25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7" id=aa108ff3-f10b-4b65-8538-905239a4e476 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:21:15 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:15.241079456Z" level=info msg="Error loading conmon cgroup of container 25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7: cgroup deleted" id=aa108ff3-f10b-4b65-8538-905239a4e476 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:21:15 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:15.246394272Z" level=info msg="Removed container 25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c/dashboard-metrics-scraper" id=aa108ff3-f10b-4b65-8538-905239a4e476 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.085939209Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.093608069Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.093646265Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.093678397Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.096647586Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.096682959Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.096706804Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.100374716Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.100416152Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.100440153Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.103870513Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.103905205Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.103930379Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.107107129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.108211835Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	80a118a0fc611       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   76b2f9c34fdae       dashboard-metrics-scraper-6ffb444bf9-c2w5c             kubernetes-dashboard
	5af6f79168eea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   5ab3337dbbd83       storage-provisioner                                    kube-system
	511509d807681       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   8c908ab843116       kubernetes-dashboard-855c9754f9-7s8z9                  kubernetes-dashboard
	7f04b8a5ddbfa       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   836c074e54465       busybox                                                default
	b339c5fa1ad36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   5ab3337dbbd83       storage-provisioner                                    kube-system
	19086a27c9d03       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   7188848700e20       kube-proxy-kp7fv                                       kube-system
	2fcda04eae0c4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   f93f52219bbc8       coredns-66bc5c9577-jxjjg                               kube-system
	cd47bb53c6c94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   80cae8baf80ae       kindnet-flj5s                                          kube-system
	e28157e052afe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c454def6a90b6       kube-scheduler-default-k8s-diff-port-103096            kube-system
	627d497d6c6c1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6dcdefa216a9b       kube-controller-manager-default-k8s-diff-port-103096   kube-system
	21dcb05b52237       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2ba1f56660b8c       kube-apiserver-default-k8s-diff-port-103096            kube-system
	005536dc4a08c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2d66619de69c2       etcd-default-k8s-diff-port-103096                      kube-system
	
	
	==> coredns [2fcda04eae0c435a3ecda39fde16360c7527d896df39314f18046cd3abfb3b0c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42734 - 2372 "HINFO IN 4171912671443374010.2985892180580319313. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021312765s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103096
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-103096
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-103096
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_18_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:18:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103096
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:21:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:19:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-103096
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                89e61585-704f-4a7a-8b1e-bc99234af9b9
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 coredns-66bc5c9577-jxjjg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m34s
	  kube-system                 etcd-default-k8s-diff-port-103096                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m40s
	  kube-system                 kindnet-flj5s                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m35s
	  kube-system                 kube-apiserver-default-k8s-diff-port-103096             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103096    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-kp7fv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-default-k8s-diff-port-103096             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-c2w5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7s8z9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m32s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m49s (x8 over 2m49s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x8 over 2m49s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x8 over 2m49s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m40s                  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s                  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m40s                  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m35s                  node-controller  Node default-k8s-diff-port-103096 event: Registered Node default-k8s-diff-port-103096 in Controller
	  Normal   NodeReady                113s                   kubelet          Node default-k8s-diff-port-103096 status is now: NodeReady
	  Normal   Starting                 74s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 74s)      kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 74s)      kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 74s)      kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-103096 event: Registered Node default-k8s-diff-port-103096 in Controller
	
	
	==> dmesg <==
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	[Nov23 11:19] overlayfs: idmapped layers are currently not supported
	[ +26.182636] overlayfs: idmapped layers are currently not supported
	[Nov23 11:20] overlayfs: idmapped layers are currently not supported
	[  +8.903071] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e] <==
	{"level":"warn","ts":"2025-11-23T11:20:34.805258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.830135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.881574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.915048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.939929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.973574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.993856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.031620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.057041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.101178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.124157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.153328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.190754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.235806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.267480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.298215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.319764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.336391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.364999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.402188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.413965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.432639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.575324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:39.346715Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.296759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T11:20:39.346858Z","caller":"traceutil/trace.go:172","msg":"trace[941285296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:515; }","duration":"117.463219ms","start":"2025-11-23T11:20:39.229380Z","end":"2025-11-23T11:20:39.346843Z","steps":["trace[941285296] 'agreement among raft nodes before linearized reading'  (duration: 112.343711ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:21:38 up  4:04,  0 user,  load average: 3.51, 3.75, 3.16
	Linux default-k8s-diff-port-103096 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd47bb53c6c9409136a0de45f335cfa1b4ae0d245cb0ee6b78f4018bf100d946] <==
	I1123 11:20:38.782984       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:20:38.794861       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:20:38.795003       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:20:38.795016       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:20:38.795030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:20:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:20:39.085676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:20:39.085749       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:20:39.085782       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:20:39.086490       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:21:09.086252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:21:09.086283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:21:09.086402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:21:09.086479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:21:10.386121       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:21:10.386153       1 metrics.go:72] Registering metrics
	I1123 11:21:10.386218       1 controller.go:711] "Syncing nftables rules"
	I1123 11:21:19.085521       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:21:19.085674       1 main.go:301] handling current node
	I1123 11:21:29.089645       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:21:29.089678       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4] <==
	I1123 11:20:37.841301       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:20:37.844396       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:20:37.844492       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 11:20:37.845326       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 11:20:37.845368       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:20:37.850277       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 11:20:37.851975       1 aggregator.go:171] initial CRD sync complete...
	I1123 11:20:37.851988       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:20:37.851995       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:20:37.852001       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:20:37.915583       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:20:37.950118       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:20:37.959981       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:20:37.971502       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:20:38.029748       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 11:20:38.157769       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:20:40.011121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:20:40.312868       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:20:40.433276       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:20:40.465066       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:20:40.685221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.0.62"}
	I1123 11:20:40.741654       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.213.108"}
	I1123 11:20:43.112842       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:20:43.166029       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:20:43.670910       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f] <==
	I1123 11:20:43.092773       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:20:43.092811       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:20:43.092828       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 11:20:43.092837       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:20:43.093071       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:20:43.093085       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:20:43.093173       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:20:43.102151       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:20:43.102215       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:20:43.102231       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:20:43.107381       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:20:43.103986       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:43.118461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 11:20:43.118476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 11:20:43.118539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 11:20:43.118551       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 11:20:43.143946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:43.144038       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:20:43.144075       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:20:43.145067       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:20:43.149630       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:20:43.149795       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:20:43.150934       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103096"
	I1123 11:20:43.151048       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:20:43.151146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [19086a27c9d0305f6aaed6b856a8c3465b3c5186f5220a276e23f82da308c4f6] <==
	I1123 11:20:40.237255       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:20:40.539601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:20:40.644848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:20:40.644960       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:20:40.645122       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:20:40.874664       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:20:40.874832       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:20:40.885553       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:20:40.889623       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:20:40.889715       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:20:40.891961       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:20:40.892048       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:20:40.892418       1 config.go:200] "Starting service config controller"
	I1123 11:20:40.892481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:20:40.897250       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:20:40.899688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:20:40.897391       1 config.go:309] "Starting node config controller"
	I1123 11:20:40.899841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:20:40.899904       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:20:40.993000       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:20:40.993113       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:20:41.000160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f] <==
	E1123 11:20:36.843945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:20:36.844157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:20:36.844219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:20:36.844270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:20:36.844317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:20:36.844365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:20:36.844409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:20:36.844452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:20:36.844504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:20:36.844569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:20:36.844612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:20:36.844657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:20:36.844741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:20:36.844795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:20:36.844834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:20:37.578123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:20:37.757920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:20:37.757989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:20:37.832798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:20:37.832888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:20:37.838111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:20:37.838207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:20:37.838276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:20:37.838329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1123 11:20:38.528424       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838219     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e091cda-84d4-4704-857c-f3e26ae01025-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-c2w5c\" (UID: \"1e091cda-84d4-4704-857c-f3e26ae01025\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c"
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838357     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhp4r\" (UniqueName: \"kubernetes.io/projected/1e091cda-84d4-4704-857c-f3e26ae01025-kube-api-access-rhp4r\") pod \"dashboard-metrics-scraper-6ffb444bf9-c2w5c\" (UID: \"1e091cda-84d4-4704-857c-f3e26ae01025\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c"
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838387     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4nj\" (UniqueName: \"kubernetes.io/projected/e36779bb-5521-45b7-9d2f-74bc1b446af9-kube-api-access-qq4nj\") pod \"kubernetes-dashboard-855c9754f9-7s8z9\" (UID: \"e36779bb-5521-45b7-9d2f-74bc1b446af9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7s8z9"
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838433     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e36779bb-5521-45b7-9d2f-74bc1b446af9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-7s8z9\" (UID: \"e36779bb-5521-45b7-9d2f-74bc1b446af9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7s8z9"
	Nov 23 11:20:44 default-k8s-diff-port-103096 kubelet[780]: W1123 11:20:44.009748     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-8c908ab84311687ab8e486cd95016014c6c797786b846765119daa08bf69d41f WatchSource:0}: Error finding container 8c908ab84311687ab8e486cd95016014c6c797786b846765119daa08bf69d41f: Status 404 returned error can't find the container with id 8c908ab84311687ab8e486cd95016014c6c797786b846765119daa08bf69d41f
	Nov 23 11:20:44 default-k8s-diff-port-103096 kubelet[780]: W1123 11:20:44.031703     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903 WatchSource:0}: Error finding container 76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903: Status 404 returned error can't find the container with id 76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903
	Nov 23 11:20:56 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:56.165487     780 scope.go:117] "RemoveContainer" containerID="0f8a4d98729b1e92227f268f2917bda72b0a9c7f0ee6fd7d66cc5fa820d975de"
	Nov 23 11:20:56 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:56.197991     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7s8z9" podStartSLOduration=6.448097448 podStartE2EDuration="13.197974634s" podCreationTimestamp="2025-11-23 11:20:43 +0000 UTC" firstStartedPulling="2025-11-23 11:20:44.013275933 +0000 UTC m=+19.426767030" lastFinishedPulling="2025-11-23 11:20:50.763153118 +0000 UTC m=+26.176644216" observedRunningTime="2025-11-23 11:20:51.178900586 +0000 UTC m=+26.592391684" watchObservedRunningTime="2025-11-23 11:20:56.197974634 +0000 UTC m=+31.611465740"
	Nov 23 11:20:57 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:57.170561     780 scope.go:117] "RemoveContainer" containerID="0f8a4d98729b1e92227f268f2917bda72b0a9c7f0ee6fd7d66cc5fa820d975de"
	Nov 23 11:20:57 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:57.171811     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:20:57 default-k8s-diff-port-103096 kubelet[780]: E1123 11:20:57.172082     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:20:58 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:58.174922     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:20:58 default-k8s-diff-port-103096 kubelet[780]: E1123 11:20:58.175081     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:03 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:03.991126     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:21:03 default-k8s-diff-port-103096 kubelet[780]: E1123 11:21:03.991929     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:10 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:10.206734     780 scope.go:117] "RemoveContainer" containerID="b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef"
	Nov 23 11:21:14 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:14.915283     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:21:15 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:15.222358     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:21:15 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:15.222635     780 scope.go:117] "RemoveContainer" containerID="80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	Nov 23 11:21:15 default-k8s-diff-port-103096 kubelet[780]: E1123 11:21:15.222799     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:23 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:23.990961     780 scope.go:117] "RemoveContainer" containerID="80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	Nov 23 11:21:23 default-k8s-diff-port-103096 kubelet[780]: E1123 11:21:23.991160     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:35 default-k8s-diff-port-103096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:21:35 default-k8s-diff-port-103096 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:21:35 default-k8s-diff-port-103096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [511509d807681fad8dd77857c090e47e76497556036046e2c6c20640528a4c94] <==
	2025/11/23 11:20:50 Using namespace: kubernetes-dashboard
	2025/11/23 11:20:50 Using in-cluster config to connect to apiserver
	2025/11/23 11:20:50 Using secret token for csrf signing
	2025/11/23 11:20:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:20:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:20:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 11:20:50 Generating JWE encryption key
	2025/11/23 11:20:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:20:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:20:52 Initializing JWE encryption key from synchronized object
	2025/11/23 11:20:52 Creating in-cluster Sidecar client
	2025/11/23 11:20:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:20:52 Serving insecurely on HTTP port: 9090
	2025/11/23 11:21:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:20:50 Starting overwatch
	
	
	==> storage-provisioner [5af6f79168eea00838e2945ae540d3eaf1f76e899c71f27379162736cced60d4] <==
	I1123 11:21:10.272293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:21:10.272471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:21:10.274958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:13.729527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:17.989804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:21.588475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:24.642903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:27.665019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:27.669781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:21:27.670004       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:21:27.670323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e8436e2-f872-447d-b72c-3f2b67de6c08", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-103096_2cc8e612-8973-4847-beb1-c021d2e50dad became leader
	I1123 11:21:27.670373       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103096_2cc8e612-8973-4847-beb1-c021d2e50dad!
	W1123 11:21:27.672148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:27.681285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:21:27.770714       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103096_2cc8e612-8973-4847-beb1-c021d2e50dad!
	W1123 11:21:29.684381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:29.691260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:31.695120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:31.699710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:33.702764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:33.707208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:35.710914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:35.735901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:37.739213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:37.745503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef] <==
	I1123 11:20:39.614188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:21:09.616469       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
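The crashed storage-provisioner instance and the kindnet reflector errors in the dump above both report "dial tcp 10.96.0.1:443: i/o timeout", which suggests the in-cluster Service VIP was unreachable for roughly the first 30 seconds after the restart before kindnet recovered. When skimming a dump this long, one way to isolate just those connectivity errors is to re-run the same logs command with a filter; a sketch reusing the -n flag already shown in this report:

	out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 100 \
	  | grep -E 'i/o timeout|dial tcp 10\.96\.0\.1'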
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096: exit status 2 (504.428361ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
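The status probe above reads a single field; --format accepts an arbitrary Go template over the status struct, so the same check can be collapsed into one call. A sketch, assuming the Kubelet and Kubeconfig field names (only Host and APIServer appear in this report):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-103096 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'

As the "(may be ok)" note says, the harness tolerates the exit status 2 here and keeps collecting post-mortem data instead of aborting.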
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-103096
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-103096:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0",
	        "Created": "2025-11-23T11:18:31.407055739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 747246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:20:16.990896679Z",
	            "FinishedAt": "2025-11-23T11:20:13.379322144Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/hosts",
	        "LogPath": "/var/lib/docker/containers/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0-json.log",
	        "Name": "/default-k8s-diff-port-103096",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-103096:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-103096",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0",
	                "LowerDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857-init/diff:/var/lib/docker/overlay2/c0018bdcd38c15db395cb08343495c95f3fa418cd092a447373e35400f4f7dc9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8dfd1ba60c8da4ff003a7551a4d1cf0c0393d490ae37ba5538d630938e80857/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103096",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103096/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103096",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103096",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103096",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a73ab855e40ff26e4a27df91e2c4f1d2a8cd2644b47f63c1633e1e08a3f9aea",
	            "SandboxKey": "/var/run/docker/netns/6a73ab855e40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-103096": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:31:1d:bd:f6:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e03847072cf28dc18f7a1d9d48fec693250a4b2bc18a1175017d251775e454c9",
	                    "EndpointID": "4808ebf5eff775a14c532e917ba07536444246523161a154890caaab03070511",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103096",
	                        "ea90e0e4e065"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
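The port bindings in the inspect output above (22/tcp -> 33842, 8444/tcp -> 33845, and so on) are what the harness later reads back with a Go template when it needs the container's SSH or API server port. A minimal sketch, assuming the docker CLI on the test host and the profile name from this run:

	docker container inspect default-k8s-diff-port-103096 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints the mapped SSH host port; 33842 in the run above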
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
E1123 11:21:40.153055  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096: exit status 2 (390.018424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-103096 logs -n 25: (1.367982205s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p no-preload-258179                                                                                                                                                                                                                          │ no-preload-258179            │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ delete  │ -p disable-driver-mounts-546564                                                                                                                                                                                                               │ disable-driver-mounts-546564 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ image   │ embed-certs-715679 image list --format=json                                                                                                                                                                                                   │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:18 UTC │
	│ pause   │ -p embed-certs-715679 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │                     │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:18 UTC │ 23 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-715679                                                                                                                                                                                                                         │ embed-certs-715679           │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-058071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p newest-cni-058071 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-058071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:19 UTC │
	│ start   │ -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │ 23 Nov 25 11:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103096 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ image   │ newest-cni-058071 image list --format=json                                                                                                                                                                                                    │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ pause   │ -p newest-cni-058071 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │                     │
	│ delete  │ -p newest-cni-058071                                                                                                                                                                                                                          │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ delete  │ -p newest-cni-058071                                                                                                                                                                                                                          │ newest-cni-058071            │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ start   │ -p auto-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-344709                  │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:20 UTC │
	│ start   │ -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:20 UTC │ 23 Nov 25 11:21 UTC │
	│ ssh     │ -p auto-344709 pgrep -a kubelet                                                                                                                                                                                                               │ auto-344709                  │ jenkins │ v1.37.0 │ 23 Nov 25 11:21 UTC │ 23 Nov 25 11:21 UTC │
	│ image   │ default-k8s-diff-port-103096 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:21 UTC │ 23 Nov 25 11:21 UTC │
	│ pause   │ -p default-k8s-diff-port-103096 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103096 │ jenkins │ v1.37.0 │ 23 Nov 25 11:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:20:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:20:16.497843  746758 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:20:16.498494  746758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:20:16.498530  746758 out.go:374] Setting ErrFile to fd 2...
	I1123 11:20:16.498550  746758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:20:16.498851  746758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:20:16.499343  746758 out.go:368] Setting JSON to false
	I1123 11:20:16.500284  746758 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14565,"bootTime":1763882251,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:20:16.500477  746758 start.go:143] virtualization:  
	I1123 11:20:16.504370  746758 out.go:179] * [default-k8s-diff-port-103096] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:20:16.507397  746758 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:20:16.507458  746758 notify.go:221] Checking for updates...
	I1123 11:20:16.512959  746758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:20:16.515819  746758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:16.518627  746758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:20:16.521372  746758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:20:16.524228  746758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:20:16.527568  746758 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:16.528254  746758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:20:16.571153  746758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:20:16.571271  746758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:20:16.687672  746758 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 11:20:16.675188502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:20:16.687766  746758 docker.go:319] overlay module found
	I1123 11:20:16.691144  746758 out.go:179] * Using the docker driver based on existing profile
	I1123 11:20:16.694162  746758 start.go:309] selected driver: docker
	I1123 11:20:16.694192  746758 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:16.694303  746758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:20:16.694957  746758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:20:16.847276  746758 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 11:20:16.833808294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:20:16.848255  746758 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:20:16.848433  746758 cni.go:84] Creating CNI manager for ""
	I1123 11:20:16.848508  746758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:16.848804  746758 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:16.852320  746758 out.go:179] * Starting "default-k8s-diff-port-103096" primary control-plane node in "default-k8s-diff-port-103096" cluster
	I1123 11:20:16.855398  746758 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 11:20:16.858482  746758 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:20:16.861359  746758 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:20:16.861447  746758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:20:16.861467  746758 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 11:20:16.861492  746758 cache.go:65] Caching tarball of preloaded images
	I1123 11:20:16.861580  746758 preload.go:238] Found /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 11:20:16.861589  746758 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 11:20:16.861699  746758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:20:16.894438  746758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:20:16.894458  746758 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:20:16.894474  746758 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:20:16.894504  746758 start.go:360] acquireMachinesLock for default-k8s-diff-port-103096: {Name:mk974e47f06d6cbaa10109a8c2801bcc82e17d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:20:16.894559  746758 start.go:364] duration metric: took 33.116µs to acquireMachinesLock for "default-k8s-diff-port-103096"
	I1123 11:20:16.894577  746758 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:20:16.894583  746758 fix.go:54] fixHost starting: 
	I1123 11:20:16.894855  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:16.934672  746758 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103096: state=Stopped err=<nil>
	W1123 11:20:16.934705  746758 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 11:20:16.119190  746221 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-344709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.110565778s)
	I1123 11:20:16.119225  746221 kic.go:203] duration metric: took 4.110729039s to extract preloaded images to volume ...
	W1123 11:20:16.119369  746221 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 11:20:16.119480  746221 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 11:20:16.188567  746221 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-344709 --name auto-344709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-344709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-344709 --network auto-344709 --ip 192.168.76.2 --volume auto-344709:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 11:20:16.570837  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Running}}
	I1123 11:20:16.636054  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:16.699510  746221 cli_runner.go:164] Run: docker exec auto-344709 stat /var/lib/dpkg/alternatives/iptables
	I1123 11:20:16.783691  746221 oci.go:144] the created container "auto-344709" has a running status.
	I1123 11:20:16.783721  746221 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa...
	I1123 11:20:16.925330  746221 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 11:20:16.954270  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:17.013585  746221 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 11:20:17.013610  746221 kic_runner.go:114] Args: [docker exec --privileged auto-344709 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 11:20:17.112413  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:17.142386  746221 machine.go:94] provisionDockerMachine start ...
	I1123 11:20:17.142487  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:17.171951  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:17.172363  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:17.172375  746221 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:20:17.174585  746221 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:20:20.329194  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-344709
	
	I1123 11:20:20.329219  746221 ubuntu.go:182] provisioning hostname "auto-344709"
	I1123 11:20:20.329282  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:20.346948  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.347268  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:20.347284  746221 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-344709 && echo "auto-344709" | sudo tee /etc/hostname
	I1123 11:20:20.516093  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-344709
	
	I1123 11:20:20.516176  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:20.536028  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.536358  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:20.536380  746221 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-344709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-344709/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-344709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:20:20.693794  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:20:20.693823  746221 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:20:20.693844  746221 ubuntu.go:190] setting up certificates
	I1123 11:20:20.693854  746221 provision.go:84] configureAuth start
	I1123 11:20:20.693912  746221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-344709
	I1123 11:20:20.713164  746221 provision.go:143] copyHostCerts
	I1123 11:20:20.713243  746221 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:20:20.713252  746221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:20:20.713338  746221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:20:20.713464  746221 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:20:20.713471  746221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:20:20.713508  746221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:20:20.713568  746221 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:20:20.713579  746221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:20:20.713605  746221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:20:20.713662  746221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.auto-344709 san=[127.0.0.1 192.168.76.2 auto-344709 localhost minikube]
	I1123 11:20:20.860627  746221 provision.go:177] copyRemoteCerts
	I1123 11:20:20.860736  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:20:20.860831  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:20.878234  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:20.995037  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:20:21.020610  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1123 11:20:21.042891  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:20:21.063250  746221 provision.go:87] duration metric: took 369.376307ms to configureAuth
	I1123 11:20:21.063286  746221 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:20:21.063470  746221 config.go:182] Loaded profile config "auto-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:21.063567  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.082396  746221 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:21.082707  746221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1123 11:20:21.082721  746221 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:20:16.938641  746758 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103096" ...
	I1123 11:20:16.938728  746758 cli_runner.go:164] Run: docker start default-k8s-diff-port-103096
	I1123 11:20:17.348125  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:17.371951  746758 kic.go:430] container "default-k8s-diff-port-103096" state is running.
	I1123 11:20:17.372389  746758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:20:17.403082  746758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/config.json ...
	I1123 11:20:17.403327  746758 machine.go:94] provisionDockerMachine start ...
	I1123 11:20:17.403389  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:17.441219  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:17.441693  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:17.441706  746758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:20:17.442388  746758 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43746->127.0.0.1:33842: read: connection reset by peer
	I1123 11:20:20.597940  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:20:20.597973  746758 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103096"
	I1123 11:20:20.598073  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:20.625784  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.626185  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:20.626202  746758 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103096 && echo "default-k8s-diff-port-103096" | sudo tee /etc/hostname
	I1123 11:20:20.800551  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103096
	
	I1123 11:20:20.800628  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:20.822444  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:20.822748  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:20.822771  746758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:20:20.990314  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:20:20.990345  746758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-540037/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-540037/.minikube}
	I1123 11:20:20.990381  746758 ubuntu.go:190] setting up certificates
	I1123 11:20:20.990392  746758 provision.go:84] configureAuth start
	I1123 11:20:20.990460  746758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:20:21.015033  746758 provision.go:143] copyHostCerts
	I1123 11:20:21.015107  746758 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem, removing ...
	I1123 11:20:21.015124  746758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem
	I1123 11:20:21.015184  746758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/ca.pem (1082 bytes)
	I1123 11:20:21.015306  746758 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem, removing ...
	I1123 11:20:21.015318  746758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem
	I1123 11:20:21.015341  746758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/cert.pem (1123 bytes)
	I1123 11:20:21.015413  746758 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem, removing ...
	I1123 11:20:21.015424  746758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem
	I1123 11:20:21.015450  746758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-540037/.minikube/key.pem (1675 bytes)
	I1123 11:20:21.015550  746758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103096 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103096 localhost minikube]
	I1123 11:20:21.382219  746758 provision.go:177] copyRemoteCerts
	I1123 11:20:21.382303  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:20:21.382363  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:21.408792  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:21.404458  746221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:20:21.404484  746221 machine.go:97] duration metric: took 4.262079044s to provisionDockerMachine
	I1123 11:20:21.404495  746221 client.go:176] duration metric: took 10.066657192s to LocalClient.Create
	I1123 11:20:21.404516  746221 start.go:167] duration metric: took 10.066720727s to libmachine.API.Create "auto-344709"
	I1123 11:20:21.404523  746221 start.go:293] postStartSetup for "auto-344709" (driver="docker")
	I1123 11:20:21.404533  746221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:20:21.404613  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:20:21.404656  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.427253  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.537633  746221 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:20:21.541885  746221 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:20:21.541960  746221 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:20:21.541985  746221 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:20:21.542079  746221 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:20:21.542212  746221 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:20:21.542370  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:20:21.552039  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:21.574945  746221 start.go:296] duration metric: took 170.408257ms for postStartSetup
	I1123 11:20:21.575426  746221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-344709
	I1123 11:20:21.596878  746221 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/config.json ...
	I1123 11:20:21.597164  746221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:20:21.597214  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.622921  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.734286  746221 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:20:21.738979  746221 start.go:128] duration metric: took 10.404719165s to createHost
	I1123 11:20:21.739006  746221 start.go:83] releasing machines lock for "auto-344709", held for 10.404847948s
	I1123 11:20:21.739076  746221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-344709
	I1123 11:20:21.755812  746221 ssh_runner.go:195] Run: cat /version.json
	I1123 11:20:21.755877  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.756158  746221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:20:21.756216  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:21.783463  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.788217  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:21.901570  746221 ssh_runner.go:195] Run: systemctl --version
	I1123 11:20:21.999125  746221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:20:22.049998  746221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:20:22.065738  746221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:20:22.065815  746221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:20:22.112851  746221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 11:20:22.112876  746221 start.go:496] detecting cgroup driver to use...
	I1123 11:20:22.112948  746221 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:20:22.113057  746221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:20:22.136375  746221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:20:22.152133  746221 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:20:22.152241  746221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:20:22.174160  746221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:20:22.202293  746221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:20:22.356730  746221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:20:22.581300  746221 docker.go:234] disabling docker service ...
	I1123 11:20:22.581380  746221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:20:22.612026  746221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:20:22.627571  746221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:20:22.757477  746221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:20:22.912600  746221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:20:22.936283  746221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:20:22.950570  746221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:20:22.950642  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.959489  746221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:20:22.959557  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.968313  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.977452  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:22.986298  746221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:20:22.994476  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.004182  746221 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.021500  746221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.031430  746221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:20:23.039777  746221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:20:23.055806  746221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:23.202990  746221 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 11:20:23.410027  746221 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:20:23.410140  746221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:20:23.417297  746221 start.go:564] Will wait 60s for crictl version
	I1123 11:20:23.417463  746221 ssh_runner.go:195] Run: which crictl
	I1123 11:20:23.422036  746221 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:20:23.457021  746221 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:20:23.457133  746221 ssh_runner.go:195] Run: crio --version
	I1123 11:20:23.495798  746221 ssh_runner.go:195] Run: crio --version
	I1123 11:20:23.537069  746221 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:20:21.527331  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 11:20:21.548879  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 11:20:21.570394  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:20:21.593505  746758 provision.go:87] duration metric: took 603.086809ms to configureAuth
	I1123 11:20:21.593528  746758 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:20:21.593724  746758 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:21.593824  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:21.621647  746758 main.go:143] libmachine: Using SSH client type: native
	I1123 11:20:21.622008  746758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1123 11:20:21.622022  746758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 11:20:22.052447  746758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 11:20:22.052477  746758 machine.go:97] duration metric: took 4.649139659s to provisionDockerMachine
	I1123 11:20:22.052488  746758 start.go:293] postStartSetup for "default-k8s-diff-port-103096" (driver="docker")
	I1123 11:20:22.052499  746758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:20:22.052559  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:20:22.052632  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.077269  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.190177  746758 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:20:22.194346  746758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:20:22.194374  746758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:20:22.194385  746758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/addons for local assets ...
	I1123 11:20:22.194437  746758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-540037/.minikube/files for local assets ...
	I1123 11:20:22.194517  746758 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem -> 5419002.pem in /etc/ssl/certs
	I1123 11:20:22.194613  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:20:22.204128  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:22.225928  746758 start.go:296] duration metric: took 173.424018ms for postStartSetup
	I1123 11:20:22.226061  746758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:20:22.226130  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.252332  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.370828  746758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:20:22.379168  746758 fix.go:56] duration metric: took 5.484577976s for fixHost
	I1123 11:20:22.379191  746758 start.go:83] releasing machines lock for "default-k8s-diff-port-103096", held for 5.484623588s
	I1123 11:20:22.379260  746758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103096
	I1123 11:20:22.400310  746758 ssh_runner.go:195] Run: cat /version.json
	I1123 11:20:22.400377  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.400310  746758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:20:22.400518  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:22.422933  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.445054  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:22.566796  746758 ssh_runner.go:195] Run: systemctl --version
	I1123 11:20:22.678914  746758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 11:20:22.740792  746758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:20:22.746833  746758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:20:22.746916  746758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:20:22.760456  746758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:20:22.760490  746758 start.go:496] detecting cgroup driver to use...
	I1123 11:20:22.760522  746758 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:20:22.760584  746758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 11:20:22.779526  746758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 11:20:22.801632  746758 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:20:22.801753  746758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:20:22.823760  746758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:20:22.843217  746758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:20:23.004851  746758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:20:23.174557  746758 docker.go:234] disabling docker service ...
	I1123 11:20:23.174687  746758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:20:23.190659  746758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:20:23.206494  746758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:20:23.342795  746758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:20:23.477178  746758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
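The stop/disable/mask sequence above makes sure neither cri-dockerd nor the Docker daemon can come back and claim the node before CRI-O takes over. Collapsed into a loop (unit names taken from the log; failures are tolerated because the units may already be inactive or absent):

    # Sketch: keep Docker and cri-dockerd out of the way so CRI-O is the only runtime.
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop "$unit" || true
    done
    sudo systemctl disable cri-docker.socket docker.socket || true
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker inactive"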
	I1123 11:20:23.492501  746758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:20:23.510089  746758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 11:20:23.510188  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.525938  746758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 11:20:23.526087  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.538523  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.550553  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.565144  746758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:20:23.575206  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.588520  746758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.597915  746758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 11:20:23.610608  746758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:20:23.622228  746758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:20:23.630790  746758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:23.779949  746758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 11:20:23.972748  746758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 11:20:23.972816  746758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 11:20:23.977861  746758 start.go:564] Will wait 60s for crictl version
	I1123 11:20:23.977946  746758 ssh_runner.go:195] Run: which crictl
	I1123 11:20:23.982119  746758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:20:24.009725  746758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 11:20:24.009824  746758 ssh_runner.go:195] Run: crio --version
	I1123 11:20:24.057654  746758 ssh_runner.go:195] Run: crio --version
	I1123 11:20:24.129177  746758 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 11:20:23.540195  746221 cli_runner.go:164] Run: docker network inspect auto-344709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:20:23.561592  746221 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:20:23.565645  746221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
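The grep/cp pair above is the idempotent idiom used to pin host.minikube.internal to the docker network gateway (192.168.76.1 for this profile): if the exact entry is missing, any stale line for that name is filtered out and a fresh one is appended before the file is copied back into place with sudo. The same idiom as a standalone snippet, values taken from the log:

    # Sketch: (re)pin host.minikube.internal in /etc/hosts.
    IP=192.168.76.1
    NAME=host.minikube.internal
    grep -q "$IP"$'\t'"$NAME"'$' /etc/hosts || {
      { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
      sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
    }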
	I1123 11:20:23.578350  746221 kubeadm.go:884] updating cluster {Name:auto-344709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-344709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:20:23.578483  746221 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:20:23.578534  746221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:23.622307  746221 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:23.622330  746221 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:20:23.622375  746221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:23.654470  746221 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:23.654494  746221 cache_images.go:86] Images are preloaded, skipping loading
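crictl images --output json is run twice above to confirm that the preloaded image tarball already contains everything needed for v1.34.1, so nothing has to be pulled or extracted. A rough manual equivalent of that check, assuming crictl is already pointed at the CRI-O socket as configured earlier:

    # Sketch: confirm the control-plane and add-on images are already in CRI-O's store.
    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'
    sudo crictl images | grep -E 'etcd|coredns|pause'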
	I1123 11:20:23.654502  746221 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 11:20:23.654642  746221 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-344709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-344709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:20:23.654757  746221 ssh_runner.go:195] Run: crio config
	I1123 11:20:23.758290  746221 cni.go:84] Creating CNI manager for ""
	I1123 11:20:23.758314  746221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:23.758354  746221 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:20:23.758385  746221 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-344709 NodeName:auto-344709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:20:23.758562  746221 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-344709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:20:23.758663  746221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:20:23.768357  746221 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:20:23.768475  746221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:20:23.777191  746221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1123 11:20:23.795693  746221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:20:23.810916  746221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
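Here the rendered kubeadm/kubelet/kube-proxy configuration shown earlier lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2208 bytes). One way to inspect what kubeadm would do with such a file without changing the node is a dry run; this is only a sketch, using the binary path minikube uses in this run, and --dry-run keeps the preview in a temporary directory:

    # Sketch: preview the cluster kubeadm would build from the generated config.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run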
	I1123 11:20:23.824125  746221 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:20:23.831042  746221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:20:23.842702  746221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:23.999167  746221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:24.020799  746221 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709 for IP: 192.168.76.2
	I1123 11:20:24.020820  746221 certs.go:195] generating shared ca certs ...
	I1123 11:20:24.020838  746221 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.021057  746221 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:20:24.021144  746221 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:20:24.021159  746221 certs.go:257] generating profile certs ...
	I1123 11:20:24.021239  746221 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.key
	I1123 11:20:24.021272  746221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt with IP's: []
	I1123 11:20:24.097233  746221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt ...
	I1123 11:20:24.097314  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: {Name:mk39ab0ede81a5b2b03a844fd50c733613ac9e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.097568  746221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.key ...
	I1123 11:20:24.097605  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.key: {Name:mk6ca533ab3ba1c63213a62d24d4f9358494d664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.097760  746221 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2
	I1123 11:20:24.097803  746221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 11:20:24.324625  746221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2 ...
	I1123 11:20:24.324662  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2: {Name:mkce87ccbd35e1c44be5c3f308eb874644b859a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.324920  746221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2 ...
	I1123 11:20:24.324939  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2: {Name:mk5d4e2d27389e157dd9d9eddcde3753ba1f3679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.325077  746221 certs.go:382] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt.a58d22d2 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt
	I1123 11:20:24.325190  746221 certs.go:386] copying /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key.a58d22d2 -> /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key
	I1123 11:20:24.325273  746221 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key
	I1123 11:20:24.325290  746221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt with IP's: []
	I1123 11:20:24.404859  746221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt ...
	I1123 11:20:24.404892  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt: {Name:mk3833ef5569939c10850347236256b52a1378b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.405098  746221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key ...
	I1123 11:20:24.405113  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key: {Name:mk83de226f433f059707d3cc287ca8e81b308213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.405326  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:20:24.405372  746221 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:20:24.405389  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:20:24.405432  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:20:24.405461  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:20:24.405488  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:20:24.405533  746221 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:24.406143  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:20:24.424624  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:20:24.444000  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:20:24.469906  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:20:24.498592  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1123 11:20:24.523723  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:20:24.544661  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:20:24.565342  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 11:20:24.592863  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:20:24.621872  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:20:24.651393  746221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:20:24.682541  746221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:20:24.699028  746221 ssh_runner.go:195] Run: openssl version
	I1123 11:20:24.706193  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:20:24.716286  746221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:20:24.722171  746221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:20:24.722239  746221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:20:24.793362  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:20:24.807184  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:20:24.816269  746221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:20:24.823435  746221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:20:24.823578  746221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:20:24.902372  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:20:24.924353  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:20:24.938619  746221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:24.943410  746221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:24.943475  746221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.001067  746221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
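The openssl/ln sequence above installs each PEM under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is the layout OpenSSL-based clients use to locate trust anchors. The same idiom for a single file, with the hash computed instead of hard-coded:

    # Sketch: trust one CA bundle the way the log does above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"     # should report: OK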
	I1123 11:20:25.017576  746221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:20:25.026094  746221 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 11:20:25.026157  746221 kubeadm.go:401] StartCluster: {Name:auto-344709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-344709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:25.026243  746221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:20:25.026317  746221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:20:25.060397  746221 cri.go:89] found id: ""
	I1123 11:20:25.060480  746221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:20:25.077343  746221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 11:20:25.087395  746221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 11:20:25.087466  746221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 11:20:25.100450  746221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 11:20:25.100476  746221 kubeadm.go:158] found existing configuration files:
	
	I1123 11:20:25.100544  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 11:20:25.111991  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 11:20:25.112065  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 11:20:25.122108  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 11:20:25.133866  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 11:20:25.133936  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 11:20:25.144658  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 11:20:25.155993  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 11:20:25.156082  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 11:20:25.165989  746221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 11:20:25.179168  746221 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 11:20:25.179236  746221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
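Each of the four grep/rm pairs above performs the same check: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, it is removed so kubeadm can regenerate it. Collapsed into one loop, with the endpoint and file list taken from the log:

    # Sketch: drop any kubeconfig that does not point at the expected control-plane endpoint.
    ENDPOINT='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done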
	I1123 11:20:25.188460  746221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 11:20:25.245795  746221 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 11:20:25.246165  746221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 11:20:25.288671  746221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 11:20:25.288749  746221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 11:20:25.288788  746221 kubeadm.go:319] OS: Linux
	I1123 11:20:25.288838  746221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 11:20:25.288891  746221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 11:20:25.288943  746221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 11:20:25.288994  746221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 11:20:25.289045  746221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 11:20:25.289103  746221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 11:20:25.289152  746221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 11:20:25.289204  746221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 11:20:25.289255  746221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 11:20:25.379620  746221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 11:20:25.379737  746221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 11:20:25.379834  746221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 11:20:25.394000  746221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 11:20:24.132108  746758 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103096 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:20:24.157330  746758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 11:20:24.161152  746758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:20:24.171208  746758 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:20:24.171358  746758 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 11:20:24.171407  746758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:24.206800  746758 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:24.206819  746758 crio.go:433] Images already preloaded, skipping extraction
	I1123 11:20:24.206876  746758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:20:24.239696  746758 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 11:20:24.239761  746758 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:20:24.239786  746758 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 11:20:24.239924  746758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:20:24.240040  746758 ssh_runner.go:195] Run: crio config
	I1123 11:20:24.331310  746758 cni.go:84] Creating CNI manager for ""
	I1123 11:20:24.331381  746758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:24.331413  746758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 11:20:24.331464  746758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103096 NodeName:default-k8s-diff-port-103096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:20:24.331632  746758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103096"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:20:24.331721  746758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:20:24.339774  746758 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:20:24.339891  746758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:20:24.347840  746758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 11:20:24.363318  746758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:20:24.376008  746758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1123 11:20:24.388754  746758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:20:24.392844  746758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:20:24.402434  746758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:24.566714  746758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:24.587023  746758 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096 for IP: 192.168.85.2
	I1123 11:20:24.587103  746758 certs.go:195] generating shared ca certs ...
	I1123 11:20:24.587135  746758 certs.go:227] acquiring lock for ca certs: {Name:mk75b0f2cf00067a6b5d432103f79df30236c4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:24.587329  746758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key
	I1123 11:20:24.587416  746758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key
	I1123 11:20:24.587451  746758 certs.go:257] generating profile certs ...
	I1123 11:20:24.587594  746758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.key
	I1123 11:20:24.587707  746758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key.3484d55d
	I1123 11:20:24.587780  746758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key
	I1123 11:20:24.587929  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem (1338 bytes)
	W1123 11:20:24.587984  746758 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900_empty.pem, impossibly tiny 0 bytes
	I1123 11:20:24.588007  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:20:24.588073  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/ca.pem (1082 bytes)
	I1123 11:20:24.588130  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:20:24.588195  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/certs/key.pem (1675 bytes)
	I1123 11:20:24.588275  746758 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem (1708 bytes)
	I1123 11:20:24.588906  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:20:24.639355  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:20:24.682784  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:20:24.711697  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:20:24.759751  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 11:20:24.817101  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 11:20:24.855934  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:20:24.896935  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:20:24.943187  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:20:24.974831  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/certs/541900.pem --> /usr/share/ca-certificates/541900.pem (1338 bytes)
	I1123 11:20:24.996714  746758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/ssl/certs/5419002.pem --> /usr/share/ca-certificates/5419002.pem (1708 bytes)
	I1123 11:20:25.024422  746758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:20:25.044992  746758 ssh_runner.go:195] Run: openssl version
	I1123 11:20:25.054285  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5419002.pem && ln -fs /usr/share/ca-certificates/5419002.pem /etc/ssl/certs/5419002.pem"
	I1123 11:20:25.066575  746758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5419002.pem
	I1123 11:20:25.072167  746758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:23 /usr/share/ca-certificates/5419002.pem
	I1123 11:20:25.072288  746758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5419002.pem
	I1123 11:20:25.126130  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5419002.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:20:25.135861  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:20:25.146015  746758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.151037  746758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:17 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.151181  746758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:20:25.196830  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:20:25.205816  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/541900.pem && ln -fs /usr/share/ca-certificates/541900.pem /etc/ssl/certs/541900.pem"
	I1123 11:20:25.215430  746758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/541900.pem
	I1123 11:20:25.220210  746758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:23 /usr/share/ca-certificates/541900.pem
	I1123 11:20:25.220335  746758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/541900.pem
	I1123 11:20:25.263107  746758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/541900.pem /etc/ssl/certs/51391683.0"
	I1123 11:20:25.271691  746758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:20:25.276846  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:20:25.320199  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:20:25.419219  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:20:25.488368  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:20:25.556288  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:20:25.629286  746758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
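Because this profile is being restarted rather than created, each existing certificate is checked for imminent expiry with openssl x509 -checkend 86400, which exits non-zero if the certificate would expire within the next 86400 seconds (24 hours); a failing check is what would trigger regeneration. A sketch of that same sweep over the certificates named above:

    # Sketch: flag any restart-critical cert that expires within 24 hours.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
        || echo "certificate $c expires within 24h"
    done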
	I1123 11:20:25.743904  746758 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:20:25.744004  746758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 11:20:25.744100  746758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:20:25.841547  746758 cri.go:89] found id: "e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f"
	I1123 11:20:25.841571  746758 cri.go:89] found id: "627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f"
	I1123 11:20:25.841595  746758 cri.go:89] found id: "21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4"
	I1123 11:20:25.841599  746758 cri.go:89] found id: "005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e"
	I1123 11:20:25.841603  746758 cri.go:89] found id: ""
	I1123 11:20:25.841658  746758 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 11:20:25.863264  746758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T11:20:25Z" level=error msg="open /run/runc: no such file or directory"
	I1123 11:20:25.863360  746758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:20:25.882260  746758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:20:25.882296  746758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:20:25.882357  746758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:20:25.937657  746758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:20:25.938153  746758 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103096" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:25.938277  746758 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-540037/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103096" cluster setting kubeconfig missing "default-k8s-diff-port-103096" context setting]
	I1123 11:20:25.938618  746758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:25.946357  746758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:20:25.972619  746758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 11:20:25.972703  746758 kubeadm.go:602] duration metric: took 90.399699ms to restartPrimaryControlPlane
	I1123 11:20:25.972727  746758 kubeadm.go:403] duration metric: took 228.83321ms to StartCluster
	I1123 11:20:25.972768  746758 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:25.972869  746758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:25.973609  746758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:25.973895  746758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:20:25.974150  746758 config.go:182] Loaded profile config "default-k8s-diff-port-103096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:25.974202  746758 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:20:25.974288  746758 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103096"
	I1123 11:20:25.974322  746758 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103096"
	W1123 11:20:25.974345  746758 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:20:25.974368  746758 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:20:25.974618  746758 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103096"
	I1123 11:20:25.974679  746758 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103096"
	W1123 11:20:25.974700  746758 addons.go:248] addon dashboard should already be in state true
	I1123 11:20:25.974761  746758 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:20:25.974917  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:25.975610  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:25.975865  746758 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103096"
	I1123 11:20:25.975887  746758 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103096"
	I1123 11:20:25.976163  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:25.980183  746758 out.go:179] * Verifying Kubernetes components...
	I1123 11:20:25.983406  746758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:26.026361  746758 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:20:26.029336  746758 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:26.029360  746758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:20:26.029548  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:26.033981  746758 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103096"
	W1123 11:20:26.034000  746758 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:20:26.034038  746758 host.go:66] Checking if "default-k8s-diff-port-103096" exists ...
	I1123 11:20:26.034462  746758 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103096 --format={{.State.Status}}
	I1123 11:20:26.039173  746758 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:20:26.042177  746758 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:20:25.400644  746221 out.go:252]   - Generating certificates and keys ...
	I1123 11:20:25.400743  746221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 11:20:25.400818  746221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 11:20:25.585202  746221 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 11:20:26.033432  746221 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 11:20:26.045103  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:20:26.045129  746758 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:20:26.045200  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:26.081744  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:26.087741  746758 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:26.087762  746758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:20:26.087825  746758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103096
	I1123 11:20:26.114885  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:26.129024  746758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/default-k8s-diff-port-103096/id_rsa Username:docker}
	I1123 11:20:26.369909  746758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:26.396069  746758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:26.397478  746758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:26.276788  746221 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 11:20:26.706092  746221 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 11:20:27.946221  746221 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 11:20:27.946766  746221 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-344709 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:20:28.369885  746221 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 11:20:28.370430  746221 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-344709 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 11:20:28.789767  746221 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 11:20:29.688319  746221 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 11:20:29.899422  746221 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 11:20:29.899957  746221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 11:20:30.867968  746221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 11:20:30.996395  746221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 11:20:26.746244  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:20:26.746270  746758 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:20:26.775134  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:20:26.775210  746758 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:20:26.840104  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:20:26.840124  746758 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:20:26.873874  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:20:26.873893  746758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:20:26.903134  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:20:26.903155  746758 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:20:26.967991  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:20:26.968012  746758 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:20:27.022430  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:20:27.022494  746758 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:20:27.069718  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:20:27.069793  746758 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:20:27.135267  746758 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:20:27.135349  746758 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:20:27.189470  746758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:20:31.570759  746221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 11:20:32.110914  746221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 11:20:32.474132  746221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 11:20:32.475209  746221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 11:20:32.481008  746221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 11:20:32.484427  746221 out.go:252]   - Booting up control plane ...
	I1123 11:20:32.484546  746221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 11:20:32.484639  746221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 11:20:32.484717  746221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 11:20:32.504215  746221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 11:20:32.504333  746221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 11:20:32.517902  746221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 11:20:32.519425  746221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 11:20:32.519481  746221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 11:20:32.766778  746221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 11:20:32.766908  746221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 11:20:33.765769  746221 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00141566s
	I1123 11:20:33.765884  746221 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 11:20:33.765973  746221 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 11:20:33.766067  746221 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 11:20:33.766148  746221 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 11:20:39.973923  746758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.603932794s)
	I1123 11:20:40.368625  746758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.971071875s)
	I1123 11:20:40.368965  746758 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (13.972832604s)
	I1123 11:20:40.368988  746758 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:20:40.419495  746758 node_ready.go:49] node "default-k8s-diff-port-103096" is "Ready"
	I1123 11:20:40.419523  746758 node_ready.go:38] duration metric: took 50.523818ms for node "default-k8s-diff-port-103096" to be "Ready" ...
	I1123 11:20:40.419539  746758 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:20:40.419598  746758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:20:40.748087  746758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.558504893s)
	I1123 11:20:40.748253  746758 api_server.go:72] duration metric: took 14.774300177s to wait for apiserver process to appear ...
	I1123 11:20:40.748270  746758 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:20:40.748289  746758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:20:40.751008  746758 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-103096 addons enable metrics-server
	
	I1123 11:20:40.753841  746758 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 11:20:39.855869  746221 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.088878956s
	I1123 11:20:40.756559  746758 addons.go:530] duration metric: took 14.782352264s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 11:20:40.763922  746758 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:20:40.763962  746758 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:20:41.248484  746758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:20:41.256815  746758 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:20:41.257901  746758 api_server.go:141] control plane version: v1.34.1
	I1123 11:20:41.257960  746758 api_server.go:131] duration metric: took 509.679384ms to wait for apiserver health ...
	I1123 11:20:41.257984  746758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:20:41.262901  746758 system_pods.go:59] 8 kube-system pods found
	I1123 11:20:41.262990  746758 system_pods.go:61] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:20:41.263016  746758 system_pods.go:61] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:20:41.263053  746758 system_pods.go:61] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:20:41.263080  746758 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:20:41.263102  746758 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:20:41.263139  746758 system_pods.go:61] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:20:41.263166  746758 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:20:41.263185  746758 system_pods.go:61] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:20:41.263226  746758 system_pods.go:74] duration metric: took 5.20879ms to wait for pod list to return data ...
	I1123 11:20:41.263254  746758 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:20:41.266257  746758 default_sa.go:45] found service account: "default"
	I1123 11:20:41.266318  746758 default_sa.go:55] duration metric: took 3.042185ms for default service account to be created ...
	I1123 11:20:41.266371  746758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:20:41.275250  746758 system_pods.go:86] 8 kube-system pods found
	I1123 11:20:41.275281  746758 system_pods.go:89] "coredns-66bc5c9577-jxjjg" [ace9508d-52f1-425a-9e84-2a8defd07ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:20:41.275292  746758 system_pods.go:89] "etcd-default-k8s-diff-port-103096" [c7fdaaf5-4c79-495c-8f3a-124bf4143e13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:20:41.275300  746758 system_pods.go:89] "kindnet-flj5s" [60f06024-23b3-40d8-8fd0-b02eb7d12f6c] Running
	I1123 11:20:41.275308  746758 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103096" [07508dec-3004-4b72-a567-6d9e5d802e29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:20:41.275312  746758 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103096" [c57764de-1c7b-4256-8936-62dad4986e42] Running
	I1123 11:20:41.275317  746758 system_pods.go:89] "kube-proxy-kp7fv" [fa7fabe6-6495-4392-a507-fb069447788d] Running
	I1123 11:20:41.275323  746758 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103096" [bb5014e3-3b34-4803-a108-1cb3f7de42bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:20:41.275327  746758 system_pods.go:89] "storage-provisioner" [1be632ff-229a-4a85-af86-6e0d2f5d9a75] Running
	I1123 11:20:41.275334  746758 system_pods.go:126] duration metric: took 8.938351ms to wait for k8s-apps to be running ...
	I1123 11:20:41.275341  746758 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:20:41.275396  746758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:20:41.297300  746758 system_svc.go:56] duration metric: took 21.949227ms WaitForService to wait for kubelet
	I1123 11:20:41.297328  746758 kubeadm.go:587] duration metric: took 15.323374242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:20:41.297346  746758 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:20:41.300526  746758 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:20:41.300558  746758 node_conditions.go:123] node cpu capacity is 2
	I1123 11:20:41.300570  746758 node_conditions.go:105] duration metric: took 3.219658ms to run NodePressure ...
	I1123 11:20:41.300583  746758 start.go:242] waiting for startup goroutines ...
	I1123 11:20:41.300590  746758 start.go:247] waiting for cluster config update ...
	I1123 11:20:41.300601  746758 start.go:256] writing updated cluster config ...
	I1123 11:20:41.300881  746758 ssh_runner.go:195] Run: rm -f paused
	I1123 11:20:41.304871  746758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:20:41.308706  746758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:20:41.571665  746221 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.805952544s
	I1123 11:20:43.769124  746221 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002468051s
	I1123 11:20:43.791441  746221 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 11:20:43.814184  746221 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 11:20:43.831244  746221 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 11:20:43.831456  746221 kubeadm.go:319] [mark-control-plane] Marking the node auto-344709 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 11:20:43.844488  746221 kubeadm.go:319] [bootstrap-token] Using token: t0aoo6.ojfbev4u7cauvp1h
	I1123 11:20:43.847497  746221 out.go:252]   - Configuring RBAC rules ...
	I1123 11:20:43.847685  746221 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 11:20:43.853328  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 11:20:43.861572  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 11:20:43.866062  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 11:20:43.873351  746221 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 11:20:43.878428  746221 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 11:20:44.176754  746221 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 11:20:44.843409  746221 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 11:20:45.181358  746221 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 11:20:45.183110  746221 kubeadm.go:319] 
	I1123 11:20:45.183187  746221 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 11:20:45.183194  746221 kubeadm.go:319] 
	I1123 11:20:45.183273  746221 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 11:20:45.183278  746221 kubeadm.go:319] 
	I1123 11:20:45.183303  746221 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 11:20:45.183923  746221 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 11:20:45.183984  746221 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 11:20:45.183989  746221 kubeadm.go:319] 
	I1123 11:20:45.184044  746221 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 11:20:45.184048  746221 kubeadm.go:319] 
	I1123 11:20:45.184096  746221 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 11:20:45.184100  746221 kubeadm.go:319] 
	I1123 11:20:45.184152  746221 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 11:20:45.184228  746221 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 11:20:45.184297  746221 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 11:20:45.184301  746221 kubeadm.go:319] 
	I1123 11:20:45.187287  746221 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 11:20:45.187430  746221 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 11:20:45.187469  746221 kubeadm.go:319] 
	I1123 11:20:45.188577  746221 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t0aoo6.ojfbev4u7cauvp1h \
	I1123 11:20:45.188706  746221 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e \
	I1123 11:20:45.189291  746221 kubeadm.go:319] 	--control-plane 
	I1123 11:20:45.189313  746221 kubeadm.go:319] 
	I1123 11:20:45.189763  746221 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 11:20:45.189776  746221 kubeadm.go:319] 
	I1123 11:20:45.190135  746221 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t0aoo6.ojfbev4u7cauvp1h \
	I1123 11:20:45.190473  746221 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a16d19ded4341ef9ca255f7d8a4937d6268a33b756649b26781ba48fd0877f0e 
	I1123 11:20:45.212056  746221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 11:20:45.212293  746221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 11:20:45.212451  746221 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 11:20:45.212471  746221 cni.go:84] Creating CNI manager for ""
	I1123 11:20:45.212479  746221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 11:20:45.217759  746221 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 11:20:45.221135  746221 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 11:20:45.235149  746221 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 11:20:45.235172  746221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 11:20:45.264867  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 11:20:45.738332  746221 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 11:20:45.738455  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:45.738522  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-344709 minikube.k8s.io/updated_at=2025_11_23T11_20_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=auto-344709 minikube.k8s.io/primary=true
	W1123 11:20:43.314594  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:45.316726  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	I1123 11:20:46.275231  746221 ops.go:34] apiserver oom_adj: -16
	I1123 11:20:46.275344  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:46.776376  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:47.275458  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:47.775745  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:48.276251  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:48.775451  746221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 11:20:48.973258  746221 kubeadm.go:1114] duration metric: took 3.234847916s to wait for elevateKubeSystemPrivileges
	I1123 11:20:48.973289  746221 kubeadm.go:403] duration metric: took 23.947136784s to StartCluster
	I1123 11:20:48.973310  746221 settings.go:142] acquiring lock: {Name:mk55c44c21723ab968c31a7e3fa118d550f42b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:48.973373  746221 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:20:48.974349  746221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/kubeconfig: {Name:mkfc0a2d471e703f0ae61dc4aff4604cad5ec87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:20:48.974567  746221 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 11:20:48.974684  746221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 11:20:48.974927  746221 config.go:182] Loaded profile config "auto-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:20:48.974937  746221 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:20:48.975011  746221 addons.go:70] Setting storage-provisioner=true in profile "auto-344709"
	I1123 11:20:48.975025  746221 addons.go:239] Setting addon storage-provisioner=true in "auto-344709"
	I1123 11:20:48.975048  746221 host.go:66] Checking if "auto-344709" exists ...
	I1123 11:20:48.975066  746221 addons.go:70] Setting default-storageclass=true in profile "auto-344709"
	I1123 11:20:48.975082  746221 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-344709"
	I1123 11:20:48.975371  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:48.975535  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:48.978889  746221 out.go:179] * Verifying Kubernetes components...
	I1123 11:20:48.982262  746221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:20:49.018263  746221 addons.go:239] Setting addon default-storageclass=true in "auto-344709"
	I1123 11:20:49.018314  746221 host.go:66] Checking if "auto-344709" exists ...
	I1123 11:20:49.018751  746221 cli_runner.go:164] Run: docker container inspect auto-344709 --format={{.State.Status}}
	I1123 11:20:49.020931  746221 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:20:49.024066  746221 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:49.024088  746221 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:20:49.024152  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:49.042022  746221 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:49.042043  746221 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:20:49.042102  746221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-344709
	I1123 11:20:49.073511  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:49.075840  746221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/auto-344709/id_rsa Username:docker}
	I1123 11:20:49.396888  746221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:20:49.495109  746221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:20:49.495306  746221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 11:20:49.740816  746221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:20:51.104003  746221 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60857874s)
	I1123 11:20:51.104080  746221 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 11:20:51.105317  746221 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.609982562s)
	I1123 11:20:51.106280  746221 node_ready.go:35] waiting up to 15m0s for node "auto-344709" to be "Ready" ...
	I1123 11:20:51.106644  746221 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.36579509s)
	I1123 11:20:51.107814  746221 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.710851161s)
	I1123 11:20:51.190050  746221 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 11:20:47.813525  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:49.815232  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	I1123 11:20:51.193071  746221 addons.go:530] duration metric: took 2.218126774s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 11:20:51.610253  746221 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-344709" context rescaled to 1 replicas
	W1123 11:20:53.109331  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:20:55.110607  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:20:51.819771  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:54.315067  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:57.609088  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:00.115989  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:20:56.814887  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:20:59.314065  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:01.315719  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:02.609202  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:05.109346  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:03.814558  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:05.814910  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:07.109505  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:09.109557  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:07.816436  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:10.315748  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:11.110579  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:13.609632  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:15.609822  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:12.814800  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:15.314158  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:17.609860  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:20.110972  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:17.813930  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	W1123 11:21:19.814507  746758 pod_ready.go:104] pod "coredns-66bc5c9577-jxjjg" is not "Ready", error: <nil>
	I1123 11:21:20.314392  746758 pod_ready.go:94] pod "coredns-66bc5c9577-jxjjg" is "Ready"
	I1123 11:21:20.314421  746758 pod_ready.go:86] duration metric: took 39.005645876s for pod "coredns-66bc5c9577-jxjjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.317212  746758 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.325232  746758 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:20.325266  746758 pod_ready.go:86] duration metric: took 8.02694ms for pod "etcd-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.329646  746758 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.334679  746758 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:20.334705  746758 pod_ready.go:86] duration metric: took 5.030378ms for pod "kube-apiserver-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.337075  746758 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.512455  746758 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:20.512534  746758 pod_ready.go:86] duration metric: took 175.434107ms for pod "kube-controller-manager-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:20.712546  746758 pod_ready.go:83] waiting for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.111877  746758 pod_ready.go:94] pod "kube-proxy-kp7fv" is "Ready"
	I1123 11:21:21.111904  746758 pod_ready.go:86] duration metric: took 399.291899ms for pod "kube-proxy-kp7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.312461  746758 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.711998  746758 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103096" is "Ready"
	I1123 11:21:21.712027  746758 pod_ready.go:86] duration metric: took 399.489978ms for pod "kube-scheduler-default-k8s-diff-port-103096" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:21.712040  746758 pod_ready.go:40] duration metric: took 40.407085659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:21:21.768928  746758 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:21:21.772121  746758 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103096" cluster and "default" namespace by default
	W1123 11:21:22.111564  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:24.609255  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:26.609355  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	W1123 11:21:29.109714  746221 node_ready.go:57] node "auto-344709" has "Ready":"False" status (will retry)
	I1123 11:21:31.110290  746221 node_ready.go:49] node "auto-344709" is "Ready"
	I1123 11:21:31.110320  746221 node_ready.go:38] duration metric: took 40.00398838s for node "auto-344709" to be "Ready" ...
	I1123 11:21:31.110336  746221 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:21:31.110395  746221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:21:31.123053  746221 api_server.go:72] duration metric: took 42.148450236s to wait for apiserver process to appear ...
	I1123 11:21:31.123081  746221 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:21:31.123100  746221 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:21:31.131323  746221 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:21:31.132561  746221 api_server.go:141] control plane version: v1.34.1
	I1123 11:21:31.132594  746221 api_server.go:131] duration metric: took 9.506036ms to wait for apiserver health ...
	I1123 11:21:31.132604  746221 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:21:31.136165  746221 system_pods.go:59] 8 kube-system pods found
	I1123 11:21:31.136209  746221 system_pods.go:61] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.136221  746221 system_pods.go:61] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.136226  746221 system_pods.go:61] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.136230  746221 system_pods.go:61] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.136234  746221 system_pods.go:61] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.136238  746221 system_pods.go:61] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.136241  746221 system_pods.go:61] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.136250  746221 system_pods.go:61] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.136262  746221 system_pods.go:74] duration metric: took 3.65109ms to wait for pod list to return data ...
	I1123 11:21:31.136276  746221 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:21:31.139762  746221 default_sa.go:45] found service account: "default"
	I1123 11:21:31.139789  746221 default_sa.go:55] duration metric: took 3.506127ms for default service account to be created ...
	I1123 11:21:31.139799  746221 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:21:31.143061  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:31.143100  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.143107  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.143137  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.143143  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.143153  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.143158  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.143171  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.143177  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.143213  746221 retry.go:31] will retry after 306.64943ms: missing components: kube-dns
	I1123 11:21:31.455175  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:31.455212  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.455219  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.455225  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.455230  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.455239  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.455245  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.455249  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.455255  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.455274  746221 retry.go:31] will retry after 359.158516ms: missing components: kube-dns
	I1123 11:21:31.818086  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:31.818119  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:21:31.818126  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:31.818132  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:31.818137  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:31.818141  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:31.818146  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:31.818150  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:31.818155  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:21:31.818171  746221 retry.go:31] will retry after 418.188948ms: missing components: kube-dns
	I1123 11:21:32.240808  746221 system_pods.go:86] 8 kube-system pods found
	I1123 11:21:32.240839  746221 system_pods.go:89] "coredns-66bc5c9577-jc8v8" [377bc7d4-d3a7-4b1e-a8e1-e6081476c746] Running
	I1123 11:21:32.240846  746221 system_pods.go:89] "etcd-auto-344709" [63f19c00-4df4-478e-8c34-b77e1f644ad0] Running
	I1123 11:21:32.240850  746221 system_pods.go:89] "kindnet-9sj26" [96638586-e05c-44a7-9540-5259874160dc] Running
	I1123 11:21:32.240855  746221 system_pods.go:89] "kube-apiserver-auto-344709" [1db6c6e3-c94a-4f10-8d9a-92472773ec05] Running
	I1123 11:21:32.240859  746221 system_pods.go:89] "kube-controller-manager-auto-344709" [06d26922-970d-4211-8d6d-7b1240d65f39] Running
	I1123 11:21:32.240864  746221 system_pods.go:89] "kube-proxy-6whfb" [03f4ea40-d939-46a8-9469-bfa3348bec96] Running
	I1123 11:21:32.240868  746221 system_pods.go:89] "kube-scheduler-auto-344709" [612ff44e-75f3-470d-bf2c-5f9ec350f507] Running
	I1123 11:21:32.240872  746221 system_pods.go:89] "storage-provisioner" [7964b66a-6a44-4c67-9975-6d963492558f] Running
	I1123 11:21:32.240880  746221 system_pods.go:126] duration metric: took 1.101074689s to wait for k8s-apps to be running ...
	I1123 11:21:32.240892  746221 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:21:32.240949  746221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:21:32.253893  746221 system_svc.go:56] duration metric: took 12.991321ms WaitForService to wait for kubelet
	I1123 11:21:32.253926  746221 kubeadm.go:587] duration metric: took 43.279329303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:21:32.253945  746221 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:21:32.256936  746221 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:21:32.256969  746221 node_conditions.go:123] node cpu capacity is 2
	I1123 11:21:32.256983  746221 node_conditions.go:105] duration metric: took 3.033182ms to run NodePressure ...
	I1123 11:21:32.256996  746221 start.go:242] waiting for startup goroutines ...
	I1123 11:21:32.257003  746221 start.go:247] waiting for cluster config update ...
	I1123 11:21:32.257014  746221 start.go:256] writing updated cluster config ...
	I1123 11:21:32.257336  746221 ssh_runner.go:195] Run: rm -f paused
	I1123 11:21:32.261332  746221 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:21:32.266525  746221 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jc8v8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.271178  746221 pod_ready.go:94] pod "coredns-66bc5c9577-jc8v8" is "Ready"
	I1123 11:21:32.271205  746221 pod_ready.go:86] duration metric: took 4.627546ms for pod "coredns-66bc5c9577-jc8v8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.273672  746221 pod_ready.go:83] waiting for pod "etcd-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.278075  746221 pod_ready.go:94] pod "etcd-auto-344709" is "Ready"
	I1123 11:21:32.278103  746221 pod_ready.go:86] duration metric: took 4.40707ms for pod "etcd-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.280371  746221 pod_ready.go:83] waiting for pod "kube-apiserver-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.284625  746221 pod_ready.go:94] pod "kube-apiserver-auto-344709" is "Ready"
	I1123 11:21:32.284649  746221 pod_ready.go:86] duration metric: took 4.254074ms for pod "kube-apiserver-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.286730  746221 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.666339  746221 pod_ready.go:94] pod "kube-controller-manager-auto-344709" is "Ready"
	I1123 11:21:32.666366  746221 pod_ready.go:86] duration metric: took 379.614742ms for pod "kube-controller-manager-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:32.865652  746221 pod_ready.go:83] waiting for pod "kube-proxy-6whfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.269706  746221 pod_ready.go:94] pod "kube-proxy-6whfb" is "Ready"
	I1123 11:21:33.269736  746221 pod_ready.go:86] duration metric: took 404.057982ms for pod "kube-proxy-6whfb" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.469493  746221 pod_ready.go:83] waiting for pod "kube-scheduler-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.865885  746221 pod_ready.go:94] pod "kube-scheduler-auto-344709" is "Ready"
	I1123 11:21:33.865981  746221 pod_ready.go:86] duration metric: took 396.461122ms for pod "kube-scheduler-auto-344709" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:21:33.866002  746221 pod_ready.go:40] duration metric: took 1.604639624s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:21:33.919338  746221 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:21:33.924569  746221 out.go:179] * Done! kubectl is now configured to use "auto-344709" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.928895262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.938063209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.938599688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.979409067Z" level=info msg="Created container 80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c/dashboard-metrics-scraper" id=56964595-e299-4a1d-abf6-e473f7601b5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.981498516Z" level=info msg="Starting container: 80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2" id=72eb0936-98ba-4fb2-a6f3-bf95017ae88c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 11:21:14 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:14.983584297Z" level=info msg="Started container" PID=1651 containerID=80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c/dashboard-metrics-scraper id=72eb0936-98ba-4fb2-a6f3-bf95017ae88c name=/runtime.v1.RuntimeService/StartContainer sandboxID=76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903
	Nov 23 11:21:14 default-k8s-diff-port-103096 conmon[1649]: conmon 80a118a0fc6115cc5a69 <ninfo>: container 1651 exited with status 1
	Nov 23 11:21:15 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:15.227512529Z" level=info msg="Removing container: 25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7" id=aa108ff3-f10b-4b65-8538-905239a4e476 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:21:15 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:15.241079456Z" level=info msg="Error loading conmon cgroup of container 25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7: cgroup deleted" id=aa108ff3-f10b-4b65-8538-905239a4e476 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:21:15 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:15.246394272Z" level=info msg="Removed container 25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c/dashboard-metrics-scraper" id=aa108ff3-f10b-4b65-8538-905239a4e476 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.085939209Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.093608069Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.093646265Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.093678397Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.096647586Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.096682959Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.096706804Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.100374716Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.100416152Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.100440153Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.103870513Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.103905205Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.103930379Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.107107129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 11:21:19 default-k8s-diff-port-103096 crio[652]: time="2025-11-23T11:21:19.108211835Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	80a118a0fc611       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   76b2f9c34fdae       dashboard-metrics-scraper-6ffb444bf9-c2w5c             kubernetes-dashboard
	5af6f79168eea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   5ab3337dbbd83       storage-provisioner                                    kube-system
	511509d807681       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   8c908ab843116       kubernetes-dashboard-855c9754f9-7s8z9                  kubernetes-dashboard
	7f04b8a5ddbfa       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   836c074e54465       busybox                                                default
	b339c5fa1ad36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   5ab3337dbbd83       storage-provisioner                                    kube-system
	19086a27c9d03       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   7188848700e20       kube-proxy-kp7fv                                       kube-system
	2fcda04eae0c4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   f93f52219bbc8       coredns-66bc5c9577-jxjjg                               kube-system
	cd47bb53c6c94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   80cae8baf80ae       kindnet-flj5s                                          kube-system
	e28157e052afe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c454def6a90b6       kube-scheduler-default-k8s-diff-port-103096            kube-system
	627d497d6c6c1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6dcdefa216a9b       kube-controller-manager-default-k8s-diff-port-103096   kube-system
	21dcb05b52237       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2ba1f56660b8c       kube-apiserver-default-k8s-diff-port-103096            kube-system
	005536dc4a08c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2d66619de69c2       etcd-default-k8s-diff-port-103096                      kube-system
	
	
	==> coredns [2fcda04eae0c435a3ecda39fde16360c7527d896df39314f18046cd3abfb3b0c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42734 - 2372 "HINFO IN 4171912671443374010.2985892180580319313. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021312765s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103096
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-103096
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-103096
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_18_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:18:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103096
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:21:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:21:29 +0000   Sun, 23 Nov 2025 11:19:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-103096
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                89e61585-704f-4a7a-8b1e-bc99234af9b9
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 coredns-66bc5c9577-jxjjg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m37s
	  kube-system                 etcd-default-k8s-diff-port-103096                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m43s
	  kube-system                 kindnet-flj5s                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m38s
	  kube-system                 kube-apiserver-default-k8s-diff-port-103096             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103096    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-kp7fv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-scheduler-default-k8s-diff-port-103096             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-c2w5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7s8z9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m35s                  kube-proxy       
	  Normal   Starting                 60s                    kube-proxy       
	  Normal   Starting                 2m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m52s (x8 over 2m52s)  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m43s                  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m43s                  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m43s                  kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m38s                  node-controller  Node default-k8s-diff-port-103096 event: Registered Node default-k8s-diff-port-103096 in Controller
	  Normal   NodeReady                116s                   kubelet          Node default-k8s-diff-port-103096 status is now: NodeReady
	  Normal   Starting                 77s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 77s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  76s (x8 over 77s)      kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s (x8 over 77s)      kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s (x8 over 77s)      kubelet          Node default-k8s-diff-port-103096 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node default-k8s-diff-port-103096 event: Registered Node default-k8s-diff-port-103096 in Controller
	
	
	==> dmesg <==
	[Nov23 11:01] overlayfs: idmapped layers are currently not supported
	[Nov23 11:02] overlayfs: idmapped layers are currently not supported
	[ +23.523752] overlayfs: idmapped layers are currently not supported
	[Nov23 11:03] overlayfs: idmapped layers are currently not supported
	[Nov23 11:04] overlayfs: idmapped layers are currently not supported
	[Nov23 11:06] overlayfs: idmapped layers are currently not supported
	[Nov23 11:07] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 11:08] overlayfs: idmapped layers are currently not supported
	[ +29.492412] overlayfs: idmapped layers are currently not supported
	[Nov23 11:10] overlayfs: idmapped layers are currently not supported
	[Nov23 11:11] overlayfs: idmapped layers are currently not supported
	[ +52.962235] overlayfs: idmapped layers are currently not supported
	[Nov23 11:12] overlayfs: idmapped layers are currently not supported
	[ +22.863749] overlayfs: idmapped layers are currently not supported
	[Nov23 11:13] overlayfs: idmapped layers are currently not supported
	[Nov23 11:14] overlayfs: idmapped layers are currently not supported
	[Nov23 11:15] overlayfs: idmapped layers are currently not supported
	[Nov23 11:16] overlayfs: idmapped layers are currently not supported
	[Nov23 11:17] overlayfs: idmapped layers are currently not supported
	[ +29.085269] overlayfs: idmapped layers are currently not supported
	[Nov23 11:18] overlayfs: idmapped layers are currently not supported
	[Nov23 11:19] overlayfs: idmapped layers are currently not supported
	[ +26.182636] overlayfs: idmapped layers are currently not supported
	[Nov23 11:20] overlayfs: idmapped layers are currently not supported
	[  +8.903071] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [005536dc4a08cc2e74db59ff3386adcf759f37c83808ec8e7525227e5627216e] <==
	{"level":"warn","ts":"2025-11-23T11:20:34.805258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.830135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.881574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.915048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.939929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.973574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:34.993856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.031620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.057041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.101178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.124157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.153328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.190754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.235806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.267480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.298215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.319764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.336391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.364999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.402188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.413965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.432639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:35.575324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:20:39.346715Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.296759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T11:20:39.346858Z","caller":"traceutil/trace.go:172","msg":"trace[941285296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:515; }","duration":"117.463219ms","start":"2025-11-23T11:20:39.229380Z","end":"2025-11-23T11:20:39.346843Z","steps":["trace[941285296] 'agreement among raft nodes before linearized reading'  (duration: 112.343711ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:21:41 up  4:04,  0 user,  load average: 3.51, 3.75, 3.16
	Linux default-k8s-diff-port-103096 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd47bb53c6c9409136a0de45f335cfa1b4ae0d245cb0ee6b78f4018bf100d946] <==
	I1123 11:20:38.782984       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:20:38.794861       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:20:38.795003       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:20:38.795016       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:20:38.795030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:20:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:20:39.085676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:20:39.085749       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:20:39.085782       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:20:39.086490       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:21:09.086252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:21:09.086283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:21:09.086402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:21:09.086479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:21:10.386121       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:21:10.386153       1 metrics.go:72] Registering metrics
	I1123 11:21:10.386218       1 controller.go:711] "Syncing nftables rules"
	I1123 11:21:19.085521       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:21:19.085674       1 main.go:301] handling current node
	I1123 11:21:29.089645       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:21:29.089678       1 main.go:301] handling current node
	I1123 11:21:39.093523       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:21:39.093568       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21dcb05b52237e1adb39fc6a3d6b76a54c5afd4e77d3efa5312cc8b77bb1d2f4] <==
	I1123 11:20:37.841301       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 11:20:37.844396       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 11:20:37.844492       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 11:20:37.845326       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 11:20:37.845368       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:20:37.850277       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 11:20:37.851975       1 aggregator.go:171] initial CRD sync complete...
	I1123 11:20:37.851988       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 11:20:37.851995       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 11:20:37.852001       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:20:37.915583       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 11:20:37.950118       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 11:20:37.959981       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:20:37.971502       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:20:38.029748       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 11:20:38.157769       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 11:20:40.011121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 11:20:40.312868       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:20:40.433276       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:20:40.465066       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:20:40.685221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.0.62"}
	I1123 11:20:40.741654       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.213.108"}
	I1123 11:20:43.112842       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:20:43.166029       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:20:43.670910       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [627d497d6c6c164273a91504576a3eddba3511129b63409f1c12576b1a90ac2f] <==
	I1123 11:20:43.092773       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 11:20:43.092811       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 11:20:43.092828       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 11:20:43.092837       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 11:20:43.093071       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:20:43.093085       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:20:43.093173       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 11:20:43.102151       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:20:43.102215       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:20:43.102231       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 11:20:43.107381       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:20:43.103986       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:43.118461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 11:20:43.118476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 11:20:43.118539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 11:20:43.118551       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 11:20:43.143946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:20:43.144038       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:20:43.144075       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:20:43.145067       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:20:43.149630       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 11:20:43.149795       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 11:20:43.150934       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103096"
	I1123 11:20:43.151048       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 11:20:43.151146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [19086a27c9d0305f6aaed6b856a8c3465b3c5186f5220a276e23f82da308c4f6] <==
	I1123 11:20:40.237255       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:20:40.539601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:20:40.644848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:20:40.644960       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:20:40.645122       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:20:40.874664       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:20:40.874832       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:20:40.885553       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:20:40.889623       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:20:40.889715       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:20:40.891961       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:20:40.892048       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:20:40.892418       1 config.go:200] "Starting service config controller"
	I1123 11:20:40.892481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:20:40.897250       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:20:40.899688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:20:40.897391       1 config.go:309] "Starting node config controller"
	I1123 11:20:40.899841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:20:40.899904       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:20:40.993000       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:20:40.993113       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 11:20:41.000160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e28157e052afed9ccd76d9c030b94bdfeb8d4bd7f67616e87072d6a9e76a9d4f] <==
	E1123 11:20:36.843945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:20:36.844157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:20:36.844219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:20:36.844270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:20:36.844317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:20:36.844365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:20:36.844409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:20:36.844452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:20:36.844504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:20:36.844569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:20:36.844612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:20:36.844657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:20:36.844741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:20:36.844795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:20:36.844834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:20:37.578123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:20:37.757920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:20:37.757989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:20:37.832798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:20:37.832888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:20:37.838111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:20:37.838207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:20:37.838276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:20:37.838329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1123 11:20:38.528424       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838219     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e091cda-84d4-4704-857c-f3e26ae01025-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-c2w5c\" (UID: \"1e091cda-84d4-4704-857c-f3e26ae01025\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c"
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838357     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhp4r\" (UniqueName: \"kubernetes.io/projected/1e091cda-84d4-4704-857c-f3e26ae01025-kube-api-access-rhp4r\") pod \"dashboard-metrics-scraper-6ffb444bf9-c2w5c\" (UID: \"1e091cda-84d4-4704-857c-f3e26ae01025\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c"
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838387     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4nj\" (UniqueName: \"kubernetes.io/projected/e36779bb-5521-45b7-9d2f-74bc1b446af9-kube-api-access-qq4nj\") pod \"kubernetes-dashboard-855c9754f9-7s8z9\" (UID: \"e36779bb-5521-45b7-9d2f-74bc1b446af9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7s8z9"
	Nov 23 11:20:43 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:43.838433     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e36779bb-5521-45b7-9d2f-74bc1b446af9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-7s8z9\" (UID: \"e36779bb-5521-45b7-9d2f-74bc1b446af9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7s8z9"
	Nov 23 11:20:44 default-k8s-diff-port-103096 kubelet[780]: W1123 11:20:44.009748     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-8c908ab84311687ab8e486cd95016014c6c797786b846765119daa08bf69d41f WatchSource:0}: Error finding container 8c908ab84311687ab8e486cd95016014c6c797786b846765119daa08bf69d41f: Status 404 returned error can't find the container with id 8c908ab84311687ab8e486cd95016014c6c797786b846765119daa08bf69d41f
	Nov 23 11:20:44 default-k8s-diff-port-103096 kubelet[780]: W1123 11:20:44.031703     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ea90e0e4e065a435531c6125ad0e4b420e536fa37f8b91cc6926a0ee44797fb0/crio-76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903 WatchSource:0}: Error finding container 76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903: Status 404 returned error can't find the container with id 76b2f9c34fdaebb540047f32cde12e2f2e5a17d8a6b6378d8fe35b5942b75903
	Nov 23 11:20:56 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:56.165487     780 scope.go:117] "RemoveContainer" containerID="0f8a4d98729b1e92227f268f2917bda72b0a9c7f0ee6fd7d66cc5fa820d975de"
	Nov 23 11:20:56 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:56.197991     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7s8z9" podStartSLOduration=6.448097448 podStartE2EDuration="13.197974634s" podCreationTimestamp="2025-11-23 11:20:43 +0000 UTC" firstStartedPulling="2025-11-23 11:20:44.013275933 +0000 UTC m=+19.426767030" lastFinishedPulling="2025-11-23 11:20:50.763153118 +0000 UTC m=+26.176644216" observedRunningTime="2025-11-23 11:20:51.178900586 +0000 UTC m=+26.592391684" watchObservedRunningTime="2025-11-23 11:20:56.197974634 +0000 UTC m=+31.611465740"
	Nov 23 11:20:57 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:57.170561     780 scope.go:117] "RemoveContainer" containerID="0f8a4d98729b1e92227f268f2917bda72b0a9c7f0ee6fd7d66cc5fa820d975de"
	Nov 23 11:20:57 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:57.171811     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:20:57 default-k8s-diff-port-103096 kubelet[780]: E1123 11:20:57.172082     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:20:58 default-k8s-diff-port-103096 kubelet[780]: I1123 11:20:58.174922     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:20:58 default-k8s-diff-port-103096 kubelet[780]: E1123 11:20:58.175081     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:03 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:03.991126     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:21:03 default-k8s-diff-port-103096 kubelet[780]: E1123 11:21:03.991929     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:10 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:10.206734     780 scope.go:117] "RemoveContainer" containerID="b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef"
	Nov 23 11:21:14 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:14.915283     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:21:15 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:15.222358     780 scope.go:117] "RemoveContainer" containerID="25051006dc35008003be478db23c26dacc232a1b6e9cf68f429823dc256721a7"
	Nov 23 11:21:15 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:15.222635     780 scope.go:117] "RemoveContainer" containerID="80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	Nov 23 11:21:15 default-k8s-diff-port-103096 kubelet[780]: E1123 11:21:15.222799     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:23 default-k8s-diff-port-103096 kubelet[780]: I1123 11:21:23.990961     780 scope.go:117] "RemoveContainer" containerID="80a118a0fc6115cc5a698aaaa57b1182240f0c2a51289274aab17c4a334fa2b2"
	Nov 23 11:21:23 default-k8s-diff-port-103096 kubelet[780]: E1123 11:21:23.991160     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c2w5c_kubernetes-dashboard(1e091cda-84d4-4704-857c-f3e26ae01025)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c2w5c" podUID="1e091cda-84d4-4704-857c-f3e26ae01025"
	Nov 23 11:21:35 default-k8s-diff-port-103096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 11:21:35 default-k8s-diff-port-103096 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 11:21:35 default-k8s-diff-port-103096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [511509d807681fad8dd77857c090e47e76497556036046e2c6c20640528a4c94] <==
	2025/11/23 11:20:50 Using namespace: kubernetes-dashboard
	2025/11/23 11:20:50 Using in-cluster config to connect to apiserver
	2025/11/23 11:20:50 Using secret token for csrf signing
	2025/11/23 11:20:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 11:20:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 11:20:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 11:20:50 Generating JWE encryption key
	2025/11/23 11:20:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 11:20:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 11:20:52 Initializing JWE encryption key from synchronized object
	2025/11/23 11:20:52 Creating in-cluster Sidecar client
	2025/11/23 11:20:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:20:52 Serving insecurely on HTTP port: 9090
	2025/11/23 11:21:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 11:20:50 Starting overwatch
	
	
	==> storage-provisioner [5af6f79168eea00838e2945ae540d3eaf1f76e899c71f27379162736cced60d4] <==
	W1123 11:21:10.274958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:13.729527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:17.989804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:21.588475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:24.642903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:27.665019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:27.669781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:21:27.670004       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:21:27.670323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e8436e2-f872-447d-b72c-3f2b67de6c08", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-103096_2cc8e612-8973-4847-beb1-c021d2e50dad became leader
	I1123 11:21:27.670373       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103096_2cc8e612-8973-4847-beb1-c021d2e50dad!
	W1123 11:21:27.672148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:27.681285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:21:27.770714       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103096_2cc8e612-8973-4847-beb1-c021d2e50dad!
	W1123 11:21:29.684381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:29.691260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:31.695120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:31.699710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:33.702764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:33.707208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:35.710914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:35.735901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:37.739213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:37.745503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:39.748450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:21:39.760309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b339c5fa1ad36460e37650644bac4eb0d7e10ea479d6f995da3370cb86c53cef] <==
	I1123 11:20:39.614188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 11:21:09.616469       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096: exit status 2 (377.432069ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.60s)
E1123 11:27:15.566344  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.38
9 TestDownloadOnly/v1.28.0/DeleteAll 0.41
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.34.1/json-events 6.65
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 163.73
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.76
48 TestAddons/StoppedEnableDisable 12.44
49 TestCertOptions 37.07
50 TestCertExpiration 244.76
52 TestForceSystemdFlag 36.16
53 TestForceSystemdEnv 41.89
58 TestErrorSpam/setup 33.9
59 TestErrorSpam/start 0.83
60 TestErrorSpam/status 1.17
61 TestErrorSpam/pause 7.03
62 TestErrorSpam/unpause 5.48
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.93
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.63
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
75 TestFunctional/serial/CacheCmd/cache/add_local 1.34
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 30.75
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.55
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 10.92
91 TestFunctional/parallel/DryRun 0.6
92 TestFunctional/parallel/InternationalLanguage 0.31
93 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 25.44
101 TestFunctional/parallel/SSHCmd 0.92
102 TestFunctional/parallel/CpCmd 2.12
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.2
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.35
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 7.02
130 TestFunctional/parallel/MountCmd/specific-port 1.69
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.28
132 TestFunctional/parallel/ServiceCmd/List 0.68
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 0.98
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
144 TestFunctional/parallel/ImageCommands/Setup 0.64
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.17
163 TestMultiControlPlane/serial/DeployApp 6.91
164 TestMultiControlPlane/serial/PingHostFromPods 1.48
165 TestMultiControlPlane/serial/AddWorkerNode 59.37
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.26
169 TestMultiControlPlane/serial/StopSecondaryNode 12.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 33.35
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 102.6
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.66
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
176 TestMultiControlPlane/serial/StopCluster 36.17
177 TestMultiControlPlane/serial/RestartCluster 89.37
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 84.08
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
185 TestJSONOutput/start/Command 80.95
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.91
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 40.36
211 TestKicCustomNetwork/use_default_bridge_network 34.99
212 TestKicExistingNetwork 33.55
213 TestKicCustomSubnet 36.51
214 TestKicStaticIP 35.53
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 74.26
219 TestMountStart/serial/StartWithMountFirst 8.74
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 8.57
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8.41
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 138.04
231 TestMultiNode/serial/DeployApp2Nodes 5.08
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 56.63
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.75
236 TestMultiNode/serial/CopyFile 10.67
237 TestMultiNode/serial/StopNode 2.43
238 TestMultiNode/serial/StartAfterStop 8.28
239 TestMultiNode/serial/RestartKeepsNodes 77.33
240 TestMultiNode/serial/DeleteNode 5.72
241 TestMultiNode/serial/StopMultiNode 24.05
242 TestMultiNode/serial/RestartMultiNode 56.62
243 TestMultiNode/serial/ValidateNameConflict 37.74
248 TestPreload 122.18
250 TestScheduledStopUnix 108.66
253 TestInsufficientStorage 13.22
254 TestRunningBinaryUpgrade 66.22
256 TestKubernetesUpgrade 112.58
257 TestMissingContainerUpgrade 116.19
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 38.02
261 TestNoKubernetes/serial/StartWithStopK8s 28.11
262 TestNoKubernetes/serial/Start 9.58
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
265 TestNoKubernetes/serial/ProfileList 0.71
266 TestNoKubernetes/serial/Stop 1.28
267 TestNoKubernetes/serial/StartNoArgs 9.97
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.48
269 TestStoppedBinaryUpgrade/Setup 0.78
270 TestStoppedBinaryUpgrade/Upgrade 67.62
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
280 TestPause/serial/Start 87.4
284 TestPause/serial/SecondStartNoReconfiguration 19.8
289 TestNetworkPlugins/group/false 3.87
295 TestStartStop/group/old-k8s-version/serial/FirstStart 60.82
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
298 TestStartStop/group/old-k8s-version/serial/Stop 12.03
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
300 TestStartStop/group/old-k8s-version/serial/SecondStart 55.56
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
306 TestStartStop/group/no-preload/serial/FirstStart 79.29
308 TestStartStop/group/embed-certs/serial/FirstStart 89.42
309 TestStartStop/group/no-preload/serial/DeployApp 9.59
311 TestStartStop/group/no-preload/serial/Stop 12.13
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/no-preload/serial/SecondStart 48.34
314 TestStartStop/group/embed-certs/serial/DeployApp 8.44
316 TestStartStop/group/embed-certs/serial/Stop 12.46
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 54.87
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.82
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.15
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
330 TestStartStop/group/newest-cni/serial/FirstStart 37.43
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/Stop 1.35
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
335 TestStartStop/group/newest-cni/serial/SecondStart 17.45
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.58
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.85
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
343 TestNetworkPlugins/group/auto/Start 82.91
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.76
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
348 TestNetworkPlugins/group/auto/KubeletFlags 0.3
349 TestNetworkPlugins/group/auto/NetCatPod 12.37
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
352 TestNetworkPlugins/group/kindnet/Start 89.13
353 TestNetworkPlugins/group/auto/DNS 0.23
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestNetworkPlugins/group/calico/Start 59.18
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.31
360 TestNetworkPlugins/group/calico/NetCatPod 10.29
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
362 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
363 TestNetworkPlugins/group/calico/DNS 0.16
364 TestNetworkPlugins/group/calico/Localhost 0.14
365 TestNetworkPlugins/group/calico/HairPin 0.14
366 TestNetworkPlugins/group/kindnet/DNS 0.19
367 TestNetworkPlugins/group/kindnet/Localhost 0.13
368 TestNetworkPlugins/group/kindnet/HairPin 0.13
369 TestNetworkPlugins/group/custom-flannel/Start 66.8
370 TestNetworkPlugins/group/enable-default-cni/Start 76.36
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.63
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
373 TestNetworkPlugins/group/custom-flannel/DNS 0.17
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/flannel/Start 64.78
382 TestNetworkPlugins/group/bridge/Start 48.41
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
384 TestNetworkPlugins/group/bridge/NetCatPod 10.31
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
387 TestNetworkPlugins/group/flannel/NetCatPod 10.28
388 TestNetworkPlugins/group/bridge/DNS 0.16
389 TestNetworkPlugins/group/bridge/Localhost 0.14
390 TestNetworkPlugins/group/bridge/HairPin 0.13
391 TestNetworkPlugins/group/flannel/DNS 0.19
392 TestNetworkPlugins/group/flannel/Localhost 0.23
393 TestNetworkPlugins/group/flannel/HairPin 0.19
x
+
TestDownloadOnly/v1.28.0/json-events (10.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-038654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-038654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.619859453s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.62s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 10:16:44.802307  541900 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 10:16:44.802403  541900 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-038654
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-038654: exit status 85 (382.193108ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-038654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-038654 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:34.231935  541906 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:34.232056  541906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:34.232067  541906 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:34.232072  541906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:34.232318  541906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	W1123 10:16:34.232436  541906 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21968-540037/.minikube/config/config.json: open /home/jenkins/minikube-integration/21968-540037/.minikube/config/config.json: no such file or directory
	I1123 10:16:34.232819  541906 out.go:368] Setting JSON to true
	I1123 10:16:34.233693  541906 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10743,"bootTime":1763882251,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:16:34.233760  541906 start.go:143] virtualization:  
	I1123 10:16:34.239763  541906 out.go:99] [download-only-038654] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1123 10:16:34.239927  541906 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 10:16:34.239993  541906 notify.go:221] Checking for updates...
	I1123 10:16:34.243122  541906 out.go:171] MINIKUBE_LOCATION=21968
	I1123 10:16:34.246568  541906 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:34.249758  541906 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:16:34.252842  541906 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:16:34.255848  541906 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 10:16:34.261649  541906 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 10:16:34.261900  541906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:34.288026  541906 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:16:34.288129  541906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:34.354868  541906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 10:16:34.341450153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:34.354978  541906 docker.go:319] overlay module found
	I1123 10:16:34.358014  541906 out.go:99] Using the docker driver based on user configuration
	I1123 10:16:34.358055  541906 start.go:309] selected driver: docker
	I1123 10:16:34.358063  541906 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:34.358176  541906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:34.416072  541906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 10:16:34.406529225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:34.416229  541906 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:34.416539  541906 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 10:16:34.416694  541906 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 10:16:34.419854  541906 out.go:171] Using Docker driver with root privileges
	I1123 10:16:34.422965  541906 cni.go:84] Creating CNI manager for ""
	I1123 10:16:34.423055  541906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:34.423067  541906 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:34.423166  541906 start.go:353] cluster config:
	{Name:download-only-038654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-038654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:34.426178  541906 out.go:99] Starting "download-only-038654" primary control-plane node in "download-only-038654" cluster
	I1123 10:16:34.426199  541906 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:34.429090  541906 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:34.429138  541906 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:16:34.429225  541906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:34.446536  541906 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:16:34.446726  541906 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 10:16:34.446840  541906 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:16:34.482114  541906 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 10:16:34.482150  541906 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:34.482341  541906 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:16:34.485631  541906 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 10:16:34.485657  541906 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1123 10:16:34.571691  541906 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1123 10:16:34.571825  541906 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 10:16:38.880605  541906 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 10:16:38.880994  541906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/download-only-038654/config.json ...
	I1123 10:16:38.881033  541906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/download-only-038654/config.json: {Name:mkc276460db0528ac9678360860f67f81637c3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:38.881228  541906 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:16:38.881436  541906 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-038654 host does not exist
	  To start a cluster, run: "minikube start -p download-only-038654"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-038654
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (6.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-263851 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-263851 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.652323532s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.65s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 10:16:52.476965  541900 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 10:16:52.476999  541900 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-263851
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-263851: exit status 85 (93.560816ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-038654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-038654 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p download-only-038654                                                                                                                                                   │ download-only-038654 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ -o=json --download-only -p download-only-263851 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-263851 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:45.871571  542103 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:45.871699  542103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:45.871712  542103 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:45.871718  542103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:45.872573  542103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:16:45.873100  542103 out.go:368] Setting JSON to true
	I1123 10:16:45.874042  542103 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10755,"bootTime":1763882251,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:16:45.874144  542103 start.go:143] virtualization:  
	I1123 10:16:45.893608  542103 out.go:99] [download-only-263851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:16:45.893976  542103 notify.go:221] Checking for updates...
	I1123 10:16:45.926777  542103 out.go:171] MINIKUBE_LOCATION=21968
	I1123 10:16:45.958522  542103 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:45.989600  542103 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:16:46.021883  542103 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:16:46.054777  542103 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 10:16:46.118170  542103 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 10:16:46.118523  542103 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:46.140621  542103 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:16:46.140728  542103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:46.196038  542103 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 10:16:46.186440044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:46.196144  542103 docker.go:319] overlay module found
	I1123 10:16:46.209513  542103 out.go:99] Using the docker driver based on user configuration
	I1123 10:16:46.209556  542103 start.go:309] selected driver: docker
	I1123 10:16:46.209562  542103 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:46.209683  542103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:46.261996  542103 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 10:16:46.252503383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:16:46.262154  542103 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:46.262470  542103 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 10:16:46.262625  542103 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 10:16:46.267056  542103 out.go:171] Using Docker driver with root privileges
	I1123 10:16:46.270979  542103 cni.go:84] Creating CNI manager for ""
	I1123 10:16:46.271063  542103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:46.271077  542103 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:46.271152  542103 start.go:353] cluster config:
	{Name:download-only-263851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-263851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:46.275080  542103 out.go:99] Starting "download-only-263851" primary control-plane node in "download-only-263851" cluster
	I1123 10:16:46.275102  542103 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:46.278778  542103 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:46.278816  542103 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:46.278979  542103 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:46.294440  542103 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:16:46.294583  542103 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 10:16:46.294605  542103 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 10:16:46.294614  542103 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 10:16:46.294621  542103 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 10:16:46.349833  542103 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:16:46.349855  542103 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:46.350021  542103 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:46.353826  542103 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 10:16:46.353851  542103 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1123 10:16:46.440101  542103 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1123 10:16:46.440153  542103 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21968-540037/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-263851 host does not exist
	  To start a cluster, run: "minikube start -p download-only-263851"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-263851
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 10:16:53.639731  541900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-279599 --alsologtostderr --binary-mirror http://127.0.0.1:42529 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-279599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-279599
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-832672
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-832672: exit status 85 (73.131014ms)

                                                
                                                
-- stdout --
	* Profile "addons-832672" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-832672"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-832672
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-832672: exit status 85 (82.032664ms)

                                                
                                                
-- stdout --
	* Profile "addons-832672" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-832672"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (163.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-832672 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-832672 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.725039305s)
--- PASS: TestAddons/Setup (163.73s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-832672 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-832672 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.76s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-832672 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-832672 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f1e6fcce-b41c-4d8a-9acf-bf6a8f5ec15c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003302816s
addons_test.go:694: (dbg) Run:  kubectl --context addons-832672 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-832672 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-832672 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-832672 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.76s)
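
The gcp-auth checks above can be rerun by hand against the same busybox pod; a minimal sketch using only commands recorded in this test (env var names and the credentials path are taken from the run above):

	# Verify the fake credentials the gcp-auth webhook injected into the pod (sketch)
	kubectl --context addons-832672 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
	kubectl --context addons-832672 exec busybox -- cat /google-app-creds.json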

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-832672
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-832672: (12.158855399s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-832672
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-832672
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-832672
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

                                                
                                    
x
+
TestCertOptions (37.07s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-700578 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.194346322s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-700578 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-700578 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-700578 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-700578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-700578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-700578: (2.138228261s)
--- PASS: TestCertOptions (37.07s)
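
The SAN check performed above can be reproduced by hand against the same profile; a minimal sketch (the grep filter is an addition, the rest mirrors the commands in this test):

	# Inspect the apiserver certificate and confirm the extra IPs/names ended up in its SANs (sketch)
	out/minikube-linux-arm64 -p cert-options-700578 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Subject Alternative Name"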

                                                
                                    
x
+
TestCertExpiration (244.76s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-629387 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.031265997s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-629387 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.765191946s)
helpers_test.go:175: Cleaning up "cert-expiration-629387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-629387
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-629387: (2.96103264s)
--- PASS: TestCertExpiration (244.76s)
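
A quick way to confirm the regenerated certificate picked up the new --cert-expiration value is to read its expiry date directly; a minimal sketch, assuming the same certificate path used in TestCertOptions above:

	# Print the apiserver certificate's notAfter date after the second start (sketch; path assumed)
	out/minikube-linux-arm64 -p cert-expiration-629387 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"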

                                                
                                    
x
+
TestForceSystemdFlag (36.16s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-332069 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-332069 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.293181985s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-332069 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-332069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-332069
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-332069: (2.505338288s)
--- PASS: TestForceSystemdFlag (36.16s)
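
The test reads /etc/crio/crio.conf.d/02-crio.conf to confirm the cgroup manager; a minimal sketch of the same check, assuming the standard CRI-O key name:

	# Confirm --force-systemd switched CRI-O to the systemd cgroup manager (sketch; key name assumed)
	out/minikube-linux-arm64 -p force-systemd-flag-332069 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected, under that assumption: cgroup_manager = "systemd"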

                                                
                                    
x
+
TestForceSystemdEnv (41.89s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-613417 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.040219981s)
helpers_test.go:175: Cleaning up "force-systemd-env-613417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-613417
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-613417: (2.85360188s)
--- PASS: TestForceSystemdEnv (41.89s)

                                                
                                    
x
+
TestErrorSpam/setup (33.9s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-561662 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-561662 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-561662 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-561662 --driver=docker  --container-runtime=crio: (33.898168576s)
--- PASS: TestErrorSpam/setup (33.90s)

                                                
                                    
x
+
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
x
+
TestErrorSpam/status (1.17s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 status
--- PASS: TestErrorSpam/status (1.17s)

                                                
                                    
x
+
TestErrorSpam/pause (7.03s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause: exit status 80 (2.252660908s)

                                                
                                                
-- stdout --
	* Pausing node nospam-561662 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:23:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause: exit status 80 (2.320570337s)

                                                
                                                
-- stdout --
	* Pausing node nospam-561662 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:23:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause: exit status 80 (2.458735589s)

                                                
                                                
-- stdout --
	* Pausing node nospam-561662 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:23:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.03s)
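
All three pause attempts above exit with GUEST_PAUSE because "sudo runc list -f json" fails inside the node with "open /run/runc: no such file or directory". The same check can be run by hand; a minimal sketch using the command quoted in the error:

	# Re-run the container listing that the pause path relies on (sketch)
	out/minikube-linux-arm64 -p nospam-561662 ssh "sudo runc list -f json"
	# and check whether the runc state directory exists at all
	out/minikube-linux-arm64 -p nospam-561662 ssh "ls -ld /run/runc"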

                                                
                                    
x
+
TestErrorSpam/unpause (5.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause: exit status 80 (1.986294878s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-561662 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause: exit status 80 (1.811929888s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-561662 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:23:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause: exit status 80 (1.676416586s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-561662 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:23:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.48s)

                                                
                                    
x
+
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 stop: (1.314796327s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561662 --log_dir /tmp/nospam-561662 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21968-540037/.minikube/files/etc/test/nested/copy/541900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (77.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-336858 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1123 10:24:38.860638  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:38.867037  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:38.879277  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:38.900667  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:38.942165  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:39.023719  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:39.185314  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:39.507149  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:40.148852  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:41.430655  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:43.993541  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:49.114915  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:24:59.357208  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-336858 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.932173645s)
--- PASS: TestFunctional/serial/StartWithProxy (77.93s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (43.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 10:25:09.248015  541900 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-336858 --alsologtostderr -v=8
E1123 10:25:19.838698  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-336858 --alsologtostderr -v=8: (43.632847956s)
functional_test.go:678: soft start took 43.633363681s for "functional-336858" cluster.
I1123 10:25:52.881157  541900 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (43.63s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-336858 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 cache add registry.k8s.io/pause:3.1: (1.228770984s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 cache add registry.k8s.io/pause:3.3: (1.211508287s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 cache add registry.k8s.io/pause:latest: (1.108362585s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-336858 /tmp/TestFunctionalserialCacheCmdcacheadd_local450200414/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cache add minikube-local-cache-test:functional-336858
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cache delete minikube-local-cache-test:functional-336858
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-336858
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.888729ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)
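
For reference, the remove / reload / verify round-trip exercised above, condensed into the standalone commands from this log (a sketch, not part of the test output):

	# Remove the cached image from the node, reload the cache, and verify it is back (sketch)
	out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-336858 cache reload
	out/minikube-linux-arm64 -p functional-336858 ssh sudo crictl inspecti registry.k8s.io/pause:latest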

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 kubectl -- --context functional-336858 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-336858 get pods
E1123 10:26:00.800091  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (30.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-336858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-336858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.751383528s)
functional_test.go:776: restart took 30.751482138s for "functional-336858" cluster.
I1123 10:26:31.637188  541900 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (30.75s)
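
The --extra-config value passed above should surface as a kube-apiserver flag on the restarted control plane; a minimal sketch of one way to confirm it (the label selector and jsonpath are assumptions, not part of the test):

	# Look for the admission-plugin flag on the running apiserver pod (sketch; selector/jsonpath assumed)
	kubectl --context functional-336858 -n kube-system get pod -l component=kube-apiserver \
	  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ' ' '\n' | grep enable-admission-plugins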

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-336858 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 logs: (1.460671241s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 logs --file /tmp/TestFunctionalserialLogsFileCmd830843002/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 logs --file /tmp/TestFunctionalserialLogsFileCmd830843002/001/logs.txt: (1.476547338s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.55s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-336858 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-336858
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-336858: exit status 115 (418.335883ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31960 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-336858 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.55s)
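
The SVC_UNREACHABLE exit above is the expected outcome, since no running pod backs the invalid-svc service from testdata/invalidsvc.yaml; a minimal sketch of the manual check, run while the manifest is still applied (kubectl usage is an assumption, the service name comes from the output above):

	# Show that the service exists but has no ready endpoints (sketch)
	kubectl --context functional-336858 get svc invalid-svc
	kubectl --context functional-336858 get endpoints invalid-svc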

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 config get cpus: exit status 14 (77.042047ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 config get cpus: exit status 14 (83.202922ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
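
For reference, a minimal Go sketch of the round trip this test drives, assuming the binary path and profile name from the log and that `config get` on an unset key exits with code 14 as observed above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes the minikube binary with the given args and returns its exit code.
	func run(args ...string) int {
		cmd := exec.Command("out/minikube-linux-arm64", args...) // path taken from the log above
		if err := cmd.Run(); err != nil {
			if exitErr, ok := err.(*exec.ExitError); ok {
				return exitErr.ExitCode()
			}
			return -1 // binary missing or not executable
		}
		return 0
	}

	func main() {
		p := "functional-336858" // profile name from the log; any existing profile works
		run("-p", p, "config", "unset", "cpus")
		fmt.Println("get after unset:", run("-p", p, "config", "get", "cpus")) // expected: 14 (key not found)
		run("-p", p, "config", "set", "cpus", "2")
		fmt.Println("get after set:  ", run("-p", p, "config", "get", "cpus")) // expected: 0
		run("-p", p, "config", "unset", "cpus")
		fmt.Println("get after unset:", run("-p", p, "config", "get", "cpus")) // expected: 14 again
	}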

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-336858 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-336858 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 567950: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.92s)

                                                
                                    
TestFunctional/parallel/DryRun (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-336858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-336858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (251.491435ms)

                                                
                                                
-- stdout --
	* [functional-336858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:37:06.449925  567416 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:37:06.450076  567416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:37:06.450100  567416 out.go:374] Setting ErrFile to fd 2...
	I1123 10:37:06.450105  567416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:37:06.450417  567416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:37:06.450852  567416 out.go:368] Setting JSON to false
	I1123 10:37:06.451762  567416 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11975,"bootTime":1763882251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:37:06.451882  567416 start.go:143] virtualization:  
	I1123 10:37:06.455416  567416 out.go:179] * [functional-336858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:37:06.458638  567416 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:37:06.458709  567416 notify.go:221] Checking for updates...
	I1123 10:37:06.465030  567416 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:37:06.469118  567416 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:37:06.471933  567416 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:37:06.477231  567416 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:37:06.480227  567416 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:37:06.483748  567416 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:37:06.484647  567416 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:37:06.525170  567416 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:37:06.525395  567416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:37:06.611712  567416 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:37:06.5985405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:37:06.611808  567416 docker.go:319] overlay module found
	I1123 10:37:06.615008  567416 out.go:179] * Using the docker driver based on existing profile
	I1123 10:37:06.617883  567416 start.go:309] selected driver: docker
	I1123 10:37:06.617911  567416 start.go:927] validating driver "docker" against &{Name:functional-336858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:37:06.618001  567416 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:37:06.622527  567416 out.go:203] 
	W1123 10:37:06.627527  567416 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 10:37:06.630238  567416 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-336858 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.60s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-336858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-336858 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (308.869617ms)

                                                
                                                
-- stdout --
	* [functional-336858] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:37:06.167871  567334 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:37:06.168044  567334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:37:06.168072  567334 out.go:374] Setting ErrFile to fd 2...
	I1123 10:37:06.168094  567334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:37:06.168498  567334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:37:06.168898  567334 out.go:368] Setting JSON to false
	I1123 10:37:06.169879  567334 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11975,"bootTime":1763882251,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:37:06.169959  567334 start.go:143] virtualization:  
	I1123 10:37:06.173879  567334 out.go:179] * [functional-336858] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1123 10:37:06.178667  567334 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:37:06.178840  567334 notify.go:221] Checking for updates...
	I1123 10:37:06.186256  567334 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:37:06.189080  567334 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 10:37:06.193221  567334 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 10:37:06.198323  567334 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:37:06.201224  567334 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:37:06.204510  567334 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:37:06.205122  567334 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:37:06.253081  567334 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:37:06.253189  567334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:37:06.353655  567334 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:37:06.342507053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:37:06.353763  567334 docker.go:319] overlay module found
	I1123 10:37:06.356908  567334 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 10:37:06.359759  567334 start.go:309] selected driver: docker
	I1123 10:37:06.359785  567334 start.go:927] validating driver "docker" against &{Name:functional-336858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-336858 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:37:06.359953  567334 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:37:06.363586  567334 out.go:203] 
	W1123 10:37:06.366471  567334 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 10:37:06.369341  567334 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.31s)
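
The French output above comes from the environment the test runs under rather than a dedicated flag; minikube picks its message language from the process locale. A small sketch of forcing that from Go, assuming LC_ALL is honoured (how the test itself selects the locale is not shown in this log):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-336858",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
		// Assumption: minikube localizes its output based on LC_ALL/LANG.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run() // expected to fail with the localized RSRC_INSUFFICIENT_REQ_MEMORY message
	}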

                                                
                                    
TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [82570423-f64c-49f5-9e58-0e177e082d33] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004333804s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-336858 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-336858 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-336858 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-336858 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [baba723a-b4b2-4916-aeb0-fee92bf87a8a] Pending
helpers_test.go:352: "sp-pod" [baba723a-b4b2-4916-aeb0-fee92bf87a8a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [baba723a-b4b2-4916-aeb0-fee92bf87a8a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.002767763s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-336858 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-336858 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-336858 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [69f6e350-71b2-4b0e-a14b-bf04f37de23d] Pending
helpers_test.go:352: "sp-pod" [69f6e350-71b2-4b0e-a14b-bf04f37de23d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003694274s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-336858 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.44s)
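
The "waiting 4m0s for pods matching ..." lines come from a helper that polls the API server until a pod with the given label is Running. A rough client-go sketch of such a polling loop, under the assumption that the kubeconfig path from this run is used; the real helpers_test.go implementation may differ:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPod polls until a pod matching selector in ns is Running or the timeout expires.
	func waitForRunningPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("no Running pod matching %q in %s after %s", selector, ns, timeout)
	}

	func main() {
		// Assumption: the kubeconfig path used throughout this run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21968-540037/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitForRunningPod(cs, "default", "test=storage-provisioner", 4*time.Minute))
	}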

                                                
                                    
TestFunctional/parallel/SSHCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.92s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh -n functional-336858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cp functional-336858:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd484966667/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh -n functional-336858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh -n functional-336858 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/541900/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /etc/test/nested/copy/541900/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/541900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /etc/ssl/certs/541900.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/541900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /usr/share/ca-certificates/541900.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5419002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /etc/ssl/certs/5419002.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5419002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /usr/share/ca-certificates/5419002.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-336858 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh "sudo systemctl is-active docker": exit status 1 (336.325061ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh "sudo systemctl is-active containerd": exit status 1 (361.770098ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-336858 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-336858 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-336858 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 563974: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-336858 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-336858 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-336858 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5327201d-c286-465b-93ae-f324c2802bc3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [5327201d-c286-465b-93ae-f324c2802bc3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002864059s
I1123 10:26:49.719120  541900 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-336858 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.39.74 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-336858 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
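
The "failed to stop process: signal: terminated" message reads oddly on a passing test, but it is simply what exec.Cmd.Wait reports once the helper has sent SIGTERM to the backgrounded tunnel process. A small sketch of that shutdown pattern, using sleep as a stand-in for the tunnel:

	package main

	import (
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)

	func main() {
		// Hypothetical long-running child standing in for the backgrounded "minikube tunnel".
		cmd := exec.Command("sleep", "60")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		time.Sleep(time.Second)

		// Stop it the way the helper does: send SIGTERM, then wait for it.
		_ = cmd.Process.Signal(syscall.SIGTERM)
		err := cmd.Wait()
		fmt.Println(err) // prints "signal: terminated", the same text seen in the log above
	}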

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "370.02354ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "66.121376ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "369.798323ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.695562ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdany-port2809150992/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763894214953352863" to /tmp/TestFunctionalparallelMountCmdany-port2809150992/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763894214953352863" to /tmp/TestFunctionalparallelMountCmdany-port2809150992/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763894214953352863" to /tmp/TestFunctionalparallelMountCmdany-port2809150992/001/test-1763894214953352863
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.350918ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 10:36:55.309058  541900 retry.go:31] will retry after 572.358655ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 10:36 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 10:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 10:36 test-1763894214953352863
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh cat /mount-9p/test-1763894214953352863
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-336858 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [37272fba-8723-4653-af78-bc05e13ef63e] Pending
helpers_test.go:352: "busybox-mount" [37272fba-8723-4653-af78-bc05e13ef63e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [37272fba-8723-4653-af78-bc05e13ef63e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [37272fba-8723-4653-af78-bc05e13ef63e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003999507s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-336858 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdany-port2809150992/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.02s)
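
The "will retry after 572.358655ms" line comes from the suite's retry helper: the 9p mount is started as a background daemon, so the first findmnt probe can race it and is retried after a short randomized delay. A minimal sketch of that pattern; the actual retry.go backoff policy may differ:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryCommand reruns fn with a small randomized delay until it succeeds or attempts run out.
	func retryCommand(attempts int, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(200+rand.Intn(600)) * time.Millisecond
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retryCommand(5, func() error {
			// Same probe the test runs inside the node: is /mount-9p a 9p mount yet?
			return exec.Command("out/minikube-linux-arm64", "-p", "functional-336858",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		})
		fmt.Println("mount visible:", err == nil)
	}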

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdspecific-port412835839/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.75716ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 10:37:02.326663  541900 retry.go:31] will retry after 265.605191ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdspecific-port412835839/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh "sudo umount -f /mount-9p": exit status 1 (278.1769ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-336858 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdspecific-port412835839/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup68281374/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup68281374/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup68281374/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-336858 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup68281374/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup68281374/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-336858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup68281374/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 service list -o json
functional_test.go:1504: Took "561.478819ms" to run "out/minikube-linux-arm64 -p functional-336858 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-336858 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-336858 image ls --format short --alsologtostderr:
I1123 10:37:22.776853  570446 out.go:360] Setting OutFile to fd 1 ...
I1123 10:37:22.777125  570446 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:22.777156  570446 out.go:374] Setting ErrFile to fd 2...
I1123 10:37:22.777180  570446 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:22.777496  570446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
I1123 10:37:22.778131  570446 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:22.778291  570446 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:22.778849  570446 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
I1123 10:37:22.797268  570446 ssh_runner.go:195] Run: systemctl --version
I1123 10:37:22.797326  570446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
I1123 10:37:22.828850  570446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
I1123 10:37:22.936121  570446 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
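
The stderr above shows how `image ls` is implemented for the crio runtime: minikube SSHes into the node, runs `sudo crictl images --output json`, and formats the result. A rough Go sketch of parsing that JSON; the field names follow the CRI ListImages response and are an assumption here:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal subset of `crictl images --output json`; field names assumed from the CRI API.
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Size     string   `json:"size"`
		} `json:"images"`
	}

	func main() {
		// Same command the log shows being run inside the node via SSH.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-336858",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			panic(err)
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // yields the same list as `image ls --format short`
			}
		}
	}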

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-336858 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-336858 image ls --format table --alsologtostderr:
I1123 10:37:23.414114  570629 out.go:360] Setting OutFile to fd 1 ...
I1123 10:37:23.414328  570629 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:23.414356  570629 out.go:374] Setting ErrFile to fd 2...
I1123 10:37:23.414375  570629 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:23.414715  570629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
I1123 10:37:23.415397  570629 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:23.415664  570629 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:23.416366  570629 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
I1123 10:37:23.441180  570629 ssh_runner.go:195] Run: systemctl --version
I1123 10:37:23.441241  570629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
I1123 10:37:23.467317  570629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
I1123 10:37:23.579816  570629 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-336858 image ls --format json --alsologtostderr:
[{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pa
use:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf
571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cf
de42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce20
6e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602
c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-336858 image ls --format json --alsologtostderr:
I1123 10:37:23.108200  570530 out.go:360] Setting OutFile to fd 1 ...
I1123 10:37:23.108557  570530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:23.108571  570530 out.go:374] Setting ErrFile to fd 2...
I1123 10:37:23.108582  570530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:23.108938  570530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
I1123 10:37:23.109618  570530 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:23.109737  570530 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:23.110253  570530 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
I1123 10:37:23.144339  570530 ssh_runner.go:195] Run: systemctl --version
I1123 10:37:23.144391  570530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
I1123 10:37:23.178211  570530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
I1123 10:37:23.298297  570530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-336858 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-336858 image ls --format yaml --alsologtostderr:
I1123 10:37:22.819158  570455 out.go:360] Setting OutFile to fd 1 ...
I1123 10:37:22.819332  570455 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:22.819362  570455 out.go:374] Setting ErrFile to fd 2...
I1123 10:37:22.819382  570455 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:22.819731  570455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
I1123 10:37:22.820457  570455 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:22.820642  570455 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:22.821330  570455 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
I1123 10:37:22.845357  570455 ssh_runner.go:195] Run: systemctl --version
I1123 10:37:22.845437  570455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
I1123 10:37:22.866711  570455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
I1123 10:37:22.975778  570455 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
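
Note: the four ImageList* checks above differ only in the requested output format; each run shells into the node and calls `sudo crictl images --output json`, as the stderr traces show. A minimal way to reproduce them by hand against this profile (the commands are the same invocations the tests use, minus --alsologtostderr):

    out/minikube-linux-arm64 -p functional-336858 image ls --format short
    out/minikube-linux-arm64 -p functional-336858 image ls --format table
    out/minikube-linux-arm64 -p functional-336858 image ls --format json
    out/minikube-linux-arm64 -p functional-336858 image ls --format yaml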

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-336858 ssh pgrep buildkitd: exit status 1 (356.562567ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image build -t localhost/my-image:functional-336858 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-336858 image build -t localhost/my-image:functional-336858 testdata/build --alsologtostderr: (3.381384158s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-336858 image build -t localhost/my-image:functional-336858 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 90180736610
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-336858
--> 8005e4f9f71
Successfully tagged localhost/my-image:functional-336858
8005e4f9f71b7cb7480e216f14a7dafaa894bf15292d11b4e855484ef99e88d4
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-336858 image build -t localhost/my-image:functional-336858 testdata/build --alsologtostderr:
I1123 10:37:23.415395  570624 out.go:360] Setting OutFile to fd 1 ...
I1123 10:37:23.416027  570624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:23.416050  570624 out.go:374] Setting ErrFile to fd 2...
I1123 10:37:23.416057  570624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:37:23.416320  570624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
I1123 10:37:23.416946  570624 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:23.417627  570624 config.go:182] Loaded profile config "functional-336858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 10:37:23.418202  570624 cli_runner.go:164] Run: docker container inspect functional-336858 --format={{.State.Status}}
I1123 10:37:23.449015  570624 ssh_runner.go:195] Run: systemctl --version
I1123 10:37:23.449071  570624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-336858
I1123 10:37:23.483790  570624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/functional-336858/id_rsa Username:docker}
I1123 10:37:23.592179  570624 build_images.go:162] Building image from path: /tmp/build.1019621225.tar
I1123 10:37:23.592252  570624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 10:37:23.602320  570624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1019621225.tar
I1123 10:37:23.607798  570624 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1019621225.tar: stat -c "%s %y" /var/lib/minikube/build/build.1019621225.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1019621225.tar': No such file or directory
I1123 10:37:23.607826  570624 ssh_runner.go:362] scp /tmp/build.1019621225.tar --> /var/lib/minikube/build/build.1019621225.tar (3072 bytes)
I1123 10:37:23.632642  570624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1019621225
I1123 10:37:23.640586  570624 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1019621225 -xf /var/lib/minikube/build/build.1019621225.tar
I1123 10:37:23.648881  570624 crio.go:315] Building image: /var/lib/minikube/build/build.1019621225
I1123 10:37:23.648971  570624 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-336858 /var/lib/minikube/build/build.1019621225 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1123 10:37:26.688981  570624 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-336858 /var/lib/minikube/build/build.1019621225 --cgroup-manager=cgroupfs: (3.039979142s)
I1123 10:37:26.689056  570624 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1019621225
I1123 10:37:26.696682  570624 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1019621225.tar
I1123 10:37:26.703896  570624 build_images.go:218] Built localhost/my-image:functional-336858 from /tmp/build.1019621225.tar
I1123 10:37:26.703928  570624 build_images.go:134] succeeded building to: functional-336858
I1123 10:37:26.703935  570624 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
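
Note: the build flow above first probes for buildkitd (`ssh pgrep buildkitd` exits 1 on this crio node, which is expected), then builds on the node itself; the stderr trace shows the tarred build context being copied to /var/lib/minikube/build and handed to `sudo podman build ... --cgroup-manager=cgroupfs`. A sketch of the equivalent manual steps, reusing the tag from this run:

    out/minikube-linux-arm64 -p functional-336858 ssh pgrep buildkitd      # exits 1 here: no buildkitd on the crio node
    out/minikube-linux-arm64 -p functional-336858 image build -t localhost/my-image:functional-336858 testdata/build
    out/minikube-linux-arm64 -p functional-336858 image ls                 # localhost/my-image:functional-336858 should now be listed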

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-336858
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
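
Note: all three UpdateContextCmd subtests (no_changes, no_minikube_cluster, no_clusters) run the same command, presumably varying only the kubeconfig state they start from; the single invocation is:

    out/minikube-linux-arm64 -p functional-336858 update-context --alsologtostderr -v=2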

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image rm kicbase/echo-server:functional-336858 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-336858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)
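
Note: ImageRemove deletes the per-profile echo-server tag created in the Setup step above and then re-lists images to confirm it is gone; by hand that is roughly:

    out/minikube-linux-arm64 -p functional-336858 image rm kicbase/echo-server:functional-336858
    out/minikube-linux-arm64 -p functional-336858 image ls    # kicbase/echo-server:functional-336858 should no longer appear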

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-336858
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-336858
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-336858
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
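
Note: the three delete_* steps are host-side cleanup of the images this suite pulled or built; combined, they amount to:

    docker rmi -f kicbase/echo-server:1.0
    docker rmi -f kicbase/echo-server:functional-336858
    docker rmi -f localhost/my-image:functional-336858
    docker rmi -f minikube-local-cache-test:functional-336858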

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 10:39:38.857306  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m26.304684887s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.17s)
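
Note: StartCluster brings up a multi-control-plane crio cluster in one shot (three control-plane nodes in this run, per the status output further down); the start alone took 3m26s here. The flags below are exactly those used by the test:

    out/minikube-linux-arm64 -p ha-448902 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-448902 status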

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 kubectl -- rollout status deployment/busybox: (4.049793461s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-6x4wd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-j8tbv -- nslookup kubernetes.io
E1123 10:41:01.925539  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-z9kcv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-6x4wd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-j8tbv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-z9kcv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-6x4wd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-j8tbv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-z9kcv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.91s)
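
Note: DeployApp applies the busybox DNS test manifest through the cluster's own kubectl and then checks in-pod name resolution. The pod names (busybox-7b57f96db7-*) are generated per run, so <pod> below is a placeholder:

    out/minikube-linux-arm64 -p ha-448902 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 -p ha-448902 kubectl -- rollout status deployment/busybox
    out/minikube-linux-arm64 -p ha-448902 kubectl -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local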

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-6x4wd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-6x4wd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-j8tbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-j8tbv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-z9kcv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 kubectl -- exec busybox-7b57f96db7-z9kcv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
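
Note: PingHostFromPods resolves host.minikube.internal from inside each busybox pod and pings the resulting host address (192.168.49.1 in this run). Per pod, with <pod> again a placeholder for the generated name:

    out/minikube-linux-arm64 -p ha-448902 kubectl -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 -p ha-448902 kubectl -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"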

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node add --alsologtostderr -v 5
E1123 10:41:40.153268  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.159828  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.171848  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.193237  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.234678  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.316193  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.477666  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:40.799445  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:41.441517  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:42.723714  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:45.285894  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:41:50.407364  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:42:00.649669  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 node add --alsologtostderr -v 5: (58.259466703s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5: (1.106108749s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.37s)
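
Note: AddWorkerNode grows the cluster by one node (ha-448902-m04, a worker, per the later status output). The interleaved "Loading client cert failed" errors appear to come from a stale cert watcher for the functional-336858 profile, whose client.crt no longer exists, and do not affect this test's result. By hand:

    out/minikube-linux-arm64 -p ha-448902 node add
    out/minikube-linux-arm64 -p ha-448902 status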

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-448902 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.09169758s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 status --output json --alsologtostderr -v 5: (1.040236087s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp testdata/cp-test.txt ha-448902:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2989952912/001/cp-test_ha-448902.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902:/home/docker/cp-test.txt ha-448902-m02:/home/docker/cp-test_ha-448902_ha-448902-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test_ha-448902_ha-448902-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902:/home/docker/cp-test.txt ha-448902-m03:/home/docker/cp-test_ha-448902_ha-448902-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test_ha-448902_ha-448902-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902:/home/docker/cp-test.txt ha-448902-m04:/home/docker/cp-test_ha-448902_ha-448902-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test_ha-448902_ha-448902-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp testdata/cp-test.txt ha-448902-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2989952912/001/cp-test_ha-448902-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m02:/home/docker/cp-test.txt ha-448902:/home/docker/cp-test_ha-448902-m02_ha-448902.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test_ha-448902-m02_ha-448902.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m02:/home/docker/cp-test.txt ha-448902-m03:/home/docker/cp-test_ha-448902-m02_ha-448902-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test_ha-448902-m02_ha-448902-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m02:/home/docker/cp-test.txt ha-448902-m04:/home/docker/cp-test_ha-448902-m02_ha-448902-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test_ha-448902-m02_ha-448902-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp testdata/cp-test.txt ha-448902-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2989952912/001/cp-test_ha-448902-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m03:/home/docker/cp-test.txt ha-448902:/home/docker/cp-test_ha-448902-m03_ha-448902.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test_ha-448902-m03_ha-448902.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m03:/home/docker/cp-test.txt ha-448902-m02:/home/docker/cp-test_ha-448902-m03_ha-448902-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test_ha-448902-m03_ha-448902-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m03:/home/docker/cp-test.txt ha-448902-m04:/home/docker/cp-test_ha-448902-m03_ha-448902-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test_ha-448902-m03_ha-448902-m04.txt"
E1123 10:42:21.131364  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp testdata/cp-test.txt ha-448902-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2989952912/001/cp-test_ha-448902-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m04:/home/docker/cp-test.txt ha-448902:/home/docker/cp-test_ha-448902-m04_ha-448902.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902 "sudo cat /home/docker/cp-test_ha-448902-m04_ha-448902.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m04:/home/docker/cp-test.txt ha-448902-m02:/home/docker/cp-test_ha-448902-m04_ha-448902-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m02 "sudo cat /home/docker/cp-test_ha-448902-m04_ha-448902-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m04:/home/docker/cp-test.txt ha-448902-m03:/home/docker/cp-test_ha-448902-m04_ha-448902-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test_ha-448902-m04_ha-448902-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)
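
Note: CopyFile pushes testdata/cp-test.txt to every node and then copies it pairwise between all nodes, verifying each hop with ssh + cat. One hop of the matrix, taken from the run above, looks like:

    out/minikube-linux-arm64 -p ha-448902 cp testdata/cp-test.txt ha-448902-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-448902 cp ha-448902-m02:/home/docker/cp-test.txt ha-448902-m03:/home/docker/cp-test_ha-448902-m02_ha-448902-m03.txt
    out/minikube-linux-arm64 -p ha-448902 ssh -n ha-448902-m03 "sudo cat /home/docker/cp-test_ha-448902-m02_ha-448902-m03.txt"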

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 node stop m02 --alsologtostderr -v 5: (12.086306562s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5: exit status 7 (790.468567ms)

                                                
                                                
-- stdout --
	ha-448902
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-448902-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-448902-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-448902-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:42:38.323399  585486 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:42:38.323592  585486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:42:38.323606  585486 out.go:374] Setting ErrFile to fd 2...
	I1123 10:42:38.323634  585486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:42:38.324010  585486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:42:38.324250  585486 out.go:368] Setting JSON to false
	I1123 10:42:38.324299  585486 mustload.go:66] Loading cluster: ha-448902
	I1123 10:42:38.324394  585486 notify.go:221] Checking for updates...
	I1123 10:42:38.325225  585486 config.go:182] Loaded profile config "ha-448902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:42:38.325245  585486 status.go:174] checking status of ha-448902 ...
	I1123 10:42:38.325854  585486 cli_runner.go:164] Run: docker container inspect ha-448902 --format={{.State.Status}}
	I1123 10:42:38.347964  585486 status.go:371] ha-448902 host status = "Running" (err=<nil>)
	I1123 10:42:38.348015  585486 host.go:66] Checking if "ha-448902" exists ...
	I1123 10:42:38.348419  585486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-448902
	I1123 10:42:38.379184  585486 host.go:66] Checking if "ha-448902" exists ...
	I1123 10:42:38.379481  585486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:42:38.379581  585486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-448902
	I1123 10:42:38.399911  585486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33526 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/ha-448902/id_rsa Username:docker}
	I1123 10:42:38.507142  585486 ssh_runner.go:195] Run: systemctl --version
	I1123 10:42:38.513639  585486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:42:38.527593  585486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:42:38.586269  585486 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-23 10:42:38.576710661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:42:38.586826  585486 kubeconfig.go:125] found "ha-448902" server: "https://192.168.49.254:8443"
	I1123 10:42:38.586861  585486 api_server.go:166] Checking apiserver status ...
	I1123 10:42:38.586910  585486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:42:38.602312  585486 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup
	I1123 10:42:38.611126  585486 api_server.go:182] apiserver freezer: "10:freezer:/docker/e0ca9baf34173334609266e49b4d6fc0fae3401d79b18c48c09f50dd94039a4e/crio/crio-95fca733d4651c8860724e58b1412fd5757956ef1618e4fecad7e89eefb7b3d8"
	I1123 10:42:38.611202  585486 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0ca9baf34173334609266e49b4d6fc0fae3401d79b18c48c09f50dd94039a4e/crio/crio-95fca733d4651c8860724e58b1412fd5757956ef1618e4fecad7e89eefb7b3d8/freezer.state
	I1123 10:42:38.619043  585486 api_server.go:204] freezer state: "THAWED"
	I1123 10:42:38.619074  585486 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 10:42:38.627716  585486 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 10:42:38.627747  585486 status.go:463] ha-448902 apiserver status = Running (err=<nil>)
	I1123 10:42:38.627759  585486 status.go:176] ha-448902 status: &{Name:ha-448902 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:42:38.627781  585486 status.go:174] checking status of ha-448902-m02 ...
	I1123 10:42:38.628109  585486 cli_runner.go:164] Run: docker container inspect ha-448902-m02 --format={{.State.Status}}
	I1123 10:42:38.645507  585486 status.go:371] ha-448902-m02 host status = "Stopped" (err=<nil>)
	I1123 10:42:38.645535  585486 status.go:384] host is not running, skipping remaining checks
	I1123 10:42:38.645543  585486 status.go:176] ha-448902-m02 status: &{Name:ha-448902-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:42:38.645564  585486 status.go:174] checking status of ha-448902-m03 ...
	I1123 10:42:38.645930  585486 cli_runner.go:164] Run: docker container inspect ha-448902-m03 --format={{.State.Status}}
	I1123 10:42:38.662804  585486 status.go:371] ha-448902-m03 host status = "Running" (err=<nil>)
	I1123 10:42:38.662826  585486 host.go:66] Checking if "ha-448902-m03" exists ...
	I1123 10:42:38.663119  585486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-448902-m03
	I1123 10:42:38.696572  585486 host.go:66] Checking if "ha-448902-m03" exists ...
	I1123 10:42:38.696906  585486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:42:38.696946  585486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-448902-m03
	I1123 10:42:38.715479  585486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/ha-448902-m03/id_rsa Username:docker}
	I1123 10:42:38.818806  585486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:42:38.833607  585486 kubeconfig.go:125] found "ha-448902" server: "https://192.168.49.254:8443"
	I1123 10:42:38.833634  585486 api_server.go:166] Checking apiserver status ...
	I1123 10:42:38.833676  585486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:42:38.845388  585486 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	I1123 10:42:38.854207  585486 api_server.go:182] apiserver freezer: "10:freezer:/docker/739725b8358ae585f16a7126952ddfeb405e44e7b2193bd87c3c1fdbb073e18e/crio/crio-bde134eea65398f6d848e97d748340a981e7a1f1fa99dd63a11a98b2f41ea0fc"
	I1123 10:42:38.854284  585486 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/739725b8358ae585f16a7126952ddfeb405e44e7b2193bd87c3c1fdbb073e18e/crio/crio-bde134eea65398f6d848e97d748340a981e7a1f1fa99dd63a11a98b2f41ea0fc/freezer.state
	I1123 10:42:38.862252  585486 api_server.go:204] freezer state: "THAWED"
	I1123 10:42:38.862281  585486 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 10:42:38.873068  585486 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 10:42:38.873098  585486 status.go:463] ha-448902-m03 apiserver status = Running (err=<nil>)
	I1123 10:42:38.873107  585486 status.go:176] ha-448902-m03 status: &{Name:ha-448902-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:42:38.873124  585486 status.go:174] checking status of ha-448902-m04 ...
	I1123 10:42:38.873477  585486 cli_runner.go:164] Run: docker container inspect ha-448902-m04 --format={{.State.Status}}
	I1123 10:42:38.892422  585486 status.go:371] ha-448902-m04 host status = "Running" (err=<nil>)
	I1123 10:42:38.892444  585486 host.go:66] Checking if "ha-448902-m04" exists ...
	I1123 10:42:38.892747  585486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-448902-m04
	I1123 10:42:38.912753  585486 host.go:66] Checking if "ha-448902-m04" exists ...
	I1123 10:42:38.913054  585486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:42:38.913105  585486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-448902-m04
	I1123 10:42:38.930596  585486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33541 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/ha-448902-m04/id_rsa Username:docker}
	I1123 10:42:39.039123  585486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:42:39.053107  585486 status.go:176] ha-448902-m04 status: &{Name:ha-448902-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
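
Editor's note: the status check logged above locates the kube-apiserver process with pgrep, reads its cgroup freezer state, and only then probes the /healthz endpoint; a 200 response is what gets reported as "apiserver status = Running". Below is a minimal Go sketch of that final probe only, using the endpoint shown in the log; skipping TLS verification is purely for illustration on a throwaway local cluster and is not how the tooling itself authenticates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; cert verification skipped only for this sketch.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned:", resp.StatusCode) // 200 is reported as Running
	}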

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (33.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node start m02 --alsologtostderr -v 5
E1123 10:43:02.093573  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 node start m02 --alsologtostderr -v 5: (31.951764565s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5: (1.262385497s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.175460918s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 stop --alsologtostderr -v 5: (27.737969327s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 start --wait true --alsologtostderr -v 5
E1123 10:44:24.014998  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:44:38.856425  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 start --wait true --alsologtostderr -v 5: (1m14.66388588s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 node delete m03 --alsologtostderr -v 5: (10.640525827s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.66s)
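
Editor's note: the node-readiness assertion above relies on a kubectl go-template that walks .items[*].status.conditions and prints the status of every Ready condition. A small standalone sketch of the same template logic with Go's text/template follows; the struct is an illustrative stand-in for the real node list (kubectl's template uses the lowercase JSON field names, while a Go struct needs exported names).

	package main

	import (
		"os"
		"text/template"
	)

	// Miniature stand-in for the node list the kubectl template walks.
	type condition struct{ Type, Status string }

	type node struct {
		Status struct{ Conditions []condition }
	}

	type nodeList struct{ Items []node }

	func main() {
		const tpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
		var list nodeList
		n := node{}
		n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
		list.Items = []node{n, n}
		// Prints " True" once per node, mirroring the test's expected output.
		template.Must(template.New("ready").Parse(tpl)).Execute(os.Stdout, list)
	}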

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 stop --alsologtostderr -v 5: (36.046112383s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5: exit status 7 (119.156495ms)

                                                
                                                
-- stdout --
	ha-448902
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-448902-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-448902-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:45:45.508402  597173 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:45:45.508537  597173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:45:45.508547  597173 out.go:374] Setting ErrFile to fd 2...
	I1123 10:45:45.508553  597173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:45:45.508787  597173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:45:45.508964  597173 out.go:368] Setting JSON to false
	I1123 10:45:45.508995  597173 mustload.go:66] Loading cluster: ha-448902
	I1123 10:45:45.509103  597173 notify.go:221] Checking for updates...
	I1123 10:45:45.509479  597173 config.go:182] Loaded profile config "ha-448902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:45:45.509520  597173 status.go:174] checking status of ha-448902 ...
	I1123 10:45:45.510085  597173 cli_runner.go:164] Run: docker container inspect ha-448902 --format={{.State.Status}}
	I1123 10:45:45.528207  597173 status.go:371] ha-448902 host status = "Stopped" (err=<nil>)
	I1123 10:45:45.528232  597173 status.go:384] host is not running, skipping remaining checks
	I1123 10:45:45.528240  597173 status.go:176] ha-448902 status: &{Name:ha-448902 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:45:45.528269  597173 status.go:174] checking status of ha-448902-m02 ...
	I1123 10:45:45.528590  597173 cli_runner.go:164] Run: docker container inspect ha-448902-m02 --format={{.State.Status}}
	I1123 10:45:45.560148  597173 status.go:371] ha-448902-m02 host status = "Stopped" (err=<nil>)
	I1123 10:45:45.560174  597173 status.go:384] host is not running, skipping remaining checks
	I1123 10:45:45.560181  597173 status.go:176] ha-448902-m02 status: &{Name:ha-448902-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:45:45.560201  597173 status.go:174] checking status of ha-448902-m04 ...
	I1123 10:45:45.560500  597173 cli_runner.go:164] Run: docker container inspect ha-448902-m04 --format={{.State.Status}}
	I1123 10:45:45.577199  597173 status.go:371] ha-448902-m04 host status = "Stopped" (err=<nil>)
	I1123 10:45:45.577217  597173 status.go:384] host is not running, skipping remaining checks
	I1123 10:45:45.577223  597173 status.go:176] ha-448902-m04 status: &{Name:ha-448902-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (89.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 10:46:40.153596  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:47:07.857678  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.36397501s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (84.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 node add --control-plane --alsologtostderr -v 5: (1m23.033632532s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-448902 status --alsologtostderr -v 5: (1.046321911s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.122678136s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

                                                
                                    
x
+
TestJSONOutput/start/Command (80.95s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-577713 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1123 10:49:38.864040  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-577713 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.944938472s)
--- PASS: TestJSONOutput/start/Command (80.95s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-577713 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-577713 --output=json --user=testUser: (5.905545851s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-680464 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-680464 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.896482ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3485453d-99fc-4dd5-b55e-55d2989c9d7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-680464] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"29e73b84-dafd-465e-828f-601dd0ee9f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"e461cdfc-2409-4fc3-83df-8cf500e189e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c3605507-8cd5-4718-8640-a63536800f6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig"}}
	{"specversion":"1.0","id":"cb51d67c-17cf-4fe8-98eb-bd7d0ab82648","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube"}}
	{"specversion":"1.0","id":"e2f0471a-e0f6-428b-b3ba-99bb5c852bcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"488deb06-f22f-4f2d-b033-a65ec1cff3eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82e6f195-12a7-4fff-9681-c6022658a52c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-680464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-680464
--- PASS: TestErrorJSONOutput (0.25s)
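
Editor's note: the --output=json run above emits one CloudEvents-style JSON object per line, with the failure surfaced as type io.k8s.sigs.minikube.error carrying message and exitcode fields. A minimal Go sketch that filters such error events out of a stream of these lines; the field names are taken from the output above, everything else is illustrative.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Only the fields this sketch reads from each event line.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe minikube ... --output=json into this program
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Println("error:", ev.Data["message"], "exit code", ev.Data["exitcode"])
			}
		}
	}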

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.36s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-096073 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-096073 --network=: (38.129304487s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-096073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-096073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-096073: (2.208684555s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.36s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.99s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-519269 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-519269 --network=bridge: (32.899574838s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-519269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-519269
E1123 10:51:40.153119  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-519269: (2.069568737s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.99s)

                                                
                                    
x
+
TestKicExistingNetwork (33.55s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 10:51:41.353791  541900 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 10:51:41.369276  541900 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 10:51:41.369355  541900 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 10:51:41.369374  541900 cli_runner.go:164] Run: docker network inspect existing-network
W1123 10:51:41.385820  541900 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 10:51:41.385849  541900 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 10:51:41.385864  541900 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 10:51:41.385969  541900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 10:51:41.403535  541900 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1ee546d11dd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:83:d3:66:68:f5} reservation:<nil>}
I1123 10:51:41.403901  541900 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c29090}
I1123 10:51:41.403928  541900 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 10:51:41.403979  541900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 10:51:41.476931  541900 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-964254 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-964254 --network=existing-network: (31.327595533s)
helpers_test.go:175: Cleaning up "existing-network-964254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-964254
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-964254: (2.063210434s)
I1123 10:52:14.883943  541900 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.55s)
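
Editor's note: the log above shows the two steps this test depends on: a bridge network is first created with an explicitly chosen free subnet, and minikube is then started against it with --network. A minimal Go sketch reproducing that sequence with the same names and flags that appear in the log; the ordering and error handling here are illustrative, not the test's own code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and echoes its combined output, roughly as the log does.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}

	func main() {
		run("docker", "network", "create", "--driver=bridge",
			"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"existing-network")
		run("out/minikube-linux-arm64", "start", "-p", "existing-network-964254",
			"--network=existing-network")
	}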

                                                
                                    
x
+
TestKicCustomSubnet (36.51s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-383009 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-383009 --subnet=192.168.60.0/24: (34.302266337s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-383009 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-383009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-383009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-383009: (2.181455638s)
--- PASS: TestKicCustomSubnet (36.51s)

                                                
                                    
x
+
TestKicStaticIP (35.53s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-136078 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-136078 --static-ip=192.168.200.200: (33.217287071s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-136078 ip
helpers_test.go:175: Cleaning up "static-ip-136078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-136078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-136078: (2.141763004s)
--- PASS: TestKicStaticIP (35.53s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (74.26s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-987450 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-987450 --driver=docker  --container-runtime=crio: (32.449718124s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-990207 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-990207 --driver=docker  --container-runtime=crio: (36.245036515s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-987450
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-990207
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-990207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-990207
E1123 10:54:38.856749  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-990207: (2.117400005s)
helpers_test.go:175: Cleaning up "first-987450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-987450
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-987450: (2.023705642s)
--- PASS: TestMinikubeProfile (74.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-737106 --memory=3072 --mount-string /tmp/TestMountStartserial2034122027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-737106 --memory=3072 --mount-string /tmp/TestMountStartserial2034122027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.736011298s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-737106 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
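
Editor's note: the mount profile above maps a host directory into the guest via --mount-string host-path:guest-path, and the verification step is simply an ls of the guest path over minikube ssh. A minimal Go sketch of the same round trip, assuming the host path and profile name from the log and a cluster that is still running; this is illustrative only.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		hostDir := "/tmp/TestMountStartserial2034122027/001" // host side of --mount-string, from the log
		// Drop a marker file on the host side of the mount.
		if err := os.WriteFile(filepath.Join(hostDir, "marker.txt"), []byte("hello\n"), 0o644); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		// List the guest side through minikube ssh, as the verification step does.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-737106",
			"ssh", "--", "ls", "/minikube-host").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("ssh failed:", err)
		}
	}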

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-738747 --memory=3072 --mount-string /tmp/TestMountStartserial2034122027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-738747 --memory=3072 --mount-string /tmp/TestMountStartserial2034122027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.567405345s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.57s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-738747 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-737106 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-737106 --alsologtostderr -v=5: (1.729504092s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-738747 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-738747
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-738747: (1.298888474s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-738747
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-738747: (7.412131741s)
--- PASS: TestMountStart/serial/RestartStopped (8.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-738747 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-890072 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1123 10:56:40.153026  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-890072 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.52501537s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.04s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-890072 -- rollout status deployment/busybox: (3.255674835s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-lws6h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-wmm9x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-lws6h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-wmm9x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-lws6h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-wmm9x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.08s)
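
Editor's note: the deployment checks above pull pod IPs and names out of the pod list with kubectl jsonpath queries ({.items[*].status.podIP} and {.items[*].metadata.name}). A plain-Go stand-in that decodes the same JSON shape and collects .items[*].metadata.name follows; the struct mirrors only the fields used, and the embedded JSON reuses the pod names from the log.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal stand-in for the pod list returned by kubectl get pods -o json.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
		} `json:"items"`
	}

	func main() {
		raw := []byte(`{"items":[{"metadata":{"name":"busybox-7b57f96db7-lws6h"}},{"metadata":{"name":"busybox-7b57f96db7-wmm9x"}}]}`)
		var pl podList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Items {
			fmt.Println(p.Metadata.Name)
		}
	}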

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-lws6h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-lws6h -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-wmm9x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-890072 -- exec busybox-7b57f96db7-wmm9x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
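
Editor's note: the in-pod check above resolves host.minikube.internal with nslookup, extracts the fifth output line's third space-separated field (awk 'NR==5' | cut -d' ' -f3), and pings that address. A minimal Go sketch of the same field extraction; the sample nslookup output below is illustrative, not captured from this run.

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mirrors awk 'NR==5' | cut -d' ' -f3: 5th line, 3rd space-separated field.
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1\n"
		fmt.Println(hostIP(sample)) // prints 192.168.67.1, the address the pods then ping
	}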

                                                
                                    
x
+
TestMultiNode/serial/AddNode (56.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-890072 -v=5 --alsologtostderr
E1123 10:57:41.928154  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:58:03.219343  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-890072 -v=5 --alsologtostderr: (55.915442254s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.63s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-890072 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp testdata/cp-test.txt multinode-890072:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2160191645/001/cp-test_multinode-890072.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072:/home/docker/cp-test.txt multinode-890072-m02:/home/docker/cp-test_multinode-890072_multinode-890072-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test_multinode-890072_multinode-890072-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072:/home/docker/cp-test.txt multinode-890072-m03:/home/docker/cp-test_multinode-890072_multinode-890072-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m03 "sudo cat /home/docker/cp-test_multinode-890072_multinode-890072-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp testdata/cp-test.txt multinode-890072-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2160191645/001/cp-test_multinode-890072-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072-m02:/home/docker/cp-test.txt multinode-890072:/home/docker/cp-test_multinode-890072-m02_multinode-890072.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072 "sudo cat /home/docker/cp-test_multinode-890072-m02_multinode-890072.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072-m02:/home/docker/cp-test.txt multinode-890072-m03:/home/docker/cp-test_multinode-890072-m02_multinode-890072-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m03 "sudo cat /home/docker/cp-test_multinode-890072-m02_multinode-890072-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp testdata/cp-test.txt multinode-890072-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2160191645/001/cp-test_multinode-890072-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072-m03:/home/docker/cp-test.txt multinode-890072:/home/docker/cp-test_multinode-890072-m03_multinode-890072.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072 "sudo cat /home/docker/cp-test_multinode-890072-m03_multinode-890072.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 cp multinode-890072-m03:/home/docker/cp-test.txt multinode-890072-m02:/home/docker/cp-test_multinode-890072-m03_multinode-890072-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test_multinode-890072-m03_multinode-890072-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.67s)
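For reference, the copy matrix above reduces to three forms of minikube cp, each verified afterwards over SSH. The commands below are a recap taken from the log (minikube stands in for the test's out/minikube-linux-arm64 build); they add no steps beyond what is shown:

  # host -> node
  minikube -p multinode-890072 cp testdata/cp-test.txt multinode-890072-m02:/home/docker/cp-test.txt
  # node -> host
  minikube -p multinode-890072 cp multinode-890072:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2160191645/001/cp-test_multinode-890072.txt
  # node -> node
  minikube -p multinode-890072 cp multinode-890072:/home/docker/cp-test.txt multinode-890072-m02:/home/docker/cp-test_multinode-890072_multinode-890072-m02.txt
  # verify the copied file on the target node
  minikube -p multinode-890072 ssh -n multinode-890072-m02 "sudo cat /home/docker/cp-test_multinode-890072_multinode-890072-m02.txt"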

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-890072 node stop m03: (1.370290798s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-890072 status: exit status 7 (522.419598ms)

                                                
                                                
-- stdout --
	multinode-890072
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-890072-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-890072-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr: exit status 7 (538.936978ms)

                                                
                                                
-- stdout --
	multinode-890072
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-890072-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-890072-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:58:47.130596  647470 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:58:47.130769  647470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:58:47.130783  647470 out.go:374] Setting ErrFile to fd 2...
	I1123 10:58:47.130788  647470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:58:47.131038  647470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 10:58:47.131205  647470 out.go:368] Setting JSON to false
	I1123 10:58:47.131237  647470 mustload.go:66] Loading cluster: multinode-890072
	I1123 10:58:47.131343  647470 notify.go:221] Checking for updates...
	I1123 10:58:47.131667  647470 config.go:182] Loaded profile config "multinode-890072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:58:47.131686  647470 status.go:174] checking status of multinode-890072 ...
	I1123 10:58:47.132206  647470 cli_runner.go:164] Run: docker container inspect multinode-890072 --format={{.State.Status}}
	I1123 10:58:47.154745  647470 status.go:371] multinode-890072 host status = "Running" (err=<nil>)
	I1123 10:58:47.154771  647470 host.go:66] Checking if "multinode-890072" exists ...
	I1123 10:58:47.155122  647470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-890072
	I1123 10:58:47.187270  647470 host.go:66] Checking if "multinode-890072" exists ...
	I1123 10:58:47.187587  647470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:58:47.188343  647470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-890072
	I1123 10:58:47.208100  647470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33646 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/multinode-890072/id_rsa Username:docker}
	I1123 10:58:47.310940  647470 ssh_runner.go:195] Run: systemctl --version
	I1123 10:58:47.317509  647470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:58:47.330168  647470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:58:47.391145  647470 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 10:58:47.381757376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:58:47.391750  647470 kubeconfig.go:125] found "multinode-890072" server: "https://192.168.67.2:8443"
	I1123 10:58:47.391789  647470 api_server.go:166] Checking apiserver status ...
	I1123 10:58:47.391835  647470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:58:47.403193  647470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	I1123 10:58:47.411760  647470 api_server.go:182] apiserver freezer: "10:freezer:/docker/9a27438d1712a80fbfe7b250c69b58da50e24d69e07d86903af0bbea018c962e/crio/crio-39749c7edc8dda4f15d9fac902468099c818dc9989e952ace0935b58e8178c6e"
	I1123 10:58:47.411856  647470 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9a27438d1712a80fbfe7b250c69b58da50e24d69e07d86903af0bbea018c962e/crio/crio-39749c7edc8dda4f15d9fac902468099c818dc9989e952ace0935b58e8178c6e/freezer.state
	I1123 10:58:47.419334  647470 api_server.go:204] freezer state: "THAWED"
	I1123 10:58:47.419363  647470 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 10:58:47.427669  647470 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 10:58:47.427699  647470 status.go:463] multinode-890072 apiserver status = Running (err=<nil>)
	I1123 10:58:47.427709  647470 status.go:176] multinode-890072 status: &{Name:multinode-890072 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:58:47.427728  647470 status.go:174] checking status of multinode-890072-m02 ...
	I1123 10:58:47.428047  647470 cli_runner.go:164] Run: docker container inspect multinode-890072-m02 --format={{.State.Status}}
	I1123 10:58:47.445010  647470 status.go:371] multinode-890072-m02 host status = "Running" (err=<nil>)
	I1123 10:58:47.445037  647470 host.go:66] Checking if "multinode-890072-m02" exists ...
	I1123 10:58:47.445366  647470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-890072-m02
	I1123 10:58:47.463347  647470 host.go:66] Checking if "multinode-890072-m02" exists ...
	I1123 10:58:47.463647  647470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:58:47.463697  647470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-890072-m02
	I1123 10:58:47.481963  647470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33651 SSHKeyPath:/home/jenkins/minikube-integration/21968-540037/.minikube/machines/multinode-890072-m02/id_rsa Username:docker}
	I1123 10:58:47.586636  647470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:58:47.598514  647470 status.go:176] multinode-890072-m02 status: &{Name:multinode-890072-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:58:47.598549  647470 status.go:174] checking status of multinode-890072-m03 ...
	I1123 10:58:47.598847  647470 cli_runner.go:164] Run: docker container inspect multinode-890072-m03 --format={{.State.Status}}
	I1123 10:58:47.618394  647470 status.go:371] multinode-890072-m03 host status = "Stopped" (err=<nil>)
	I1123 10:58:47.618429  647470 status.go:384] host is not running, skipping remaining checks
	I1123 10:58:47.618436  647470 status.go:176] multinode-890072-m03 status: &{Name:multinode-890072-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
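Condensed, the stop/status pattern above is: stop one worker, then confirm minikube status reports it as Stopped. The exit status 7 is expected, since status exits non-zero whenever any node is not running (recap of the logged commands):

  minikube -p multinode-890072 node stop m03
  minikube -p multinode-890072 status                     # exit 7: m03 host/kubelet Stopped
  minikube -p multinode-890072 status --alsologtostderr   # same result, with the status-check log shown above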

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-890072 node start m03 -v=5 --alsologtostderr: (7.470367505s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.28s)
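The recovery step is the mirror image: restart only the stopped node, then confirm both minikube and the API server see all nodes again (commands as logged, with minikube standing in for the test binary):

  minikube -p multinode-890072 node start m03 -v=5 --alsologtostderr
  minikube -p multinode-890072 status -v=5 --alsologtostderr
  kubectl get nodes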

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (77.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-890072
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-890072
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-890072: (25.056584577s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-890072 --wait=true -v=5 --alsologtostderr
E1123 10:59:38.857267  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-890072 --wait=true -v=5 --alsologtostderr: (52.131725485s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-890072
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.33s)
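Condensed, the whole-cluster restart above is: record the node list, stop every node, start again with --wait=true, and confirm the node list is unchanged (recap of the logged commands):

  minikube node list -p multinode-890072
  minikube stop -p multinode-890072
  minikube start -p multinode-890072 --wait=true -v=5 --alsologtostderr
  minikube node list -p multinode-890072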

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-890072 node delete m03: (5.015044537s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)
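Condensed, the deletion check above removes m03 and then verifies the remaining nodes both through minikube status and through the Kubernetes Ready condition; the go-template prints one Ready status per remaining node (recap of the logged commands):

  minikube -p multinode-890072 node delete m03
  minikube -p multinode-890072 status --alsologtostderr
  kubectl get nodes
  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"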

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-890072 stop: (23.850237408s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-890072 status: exit status 7 (99.2774ms)

                                                
                                                
-- stdout --
	multinode-890072
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-890072-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr: exit status 7 (95.386765ms)

                                                
                                                
-- stdout --
	multinode-890072
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-890072-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:00:42.949570  655365 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:00:42.949744  655365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:00:42.949761  655365 out.go:374] Setting ErrFile to fd 2...
	I1123 11:00:42.949766  655365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:00:42.950139  655365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:00:42.950374  655365 out.go:368] Setting JSON to false
	I1123 11:00:42.950404  655365 mustload.go:66] Loading cluster: multinode-890072
	I1123 11:00:42.951100  655365 config.go:182] Loaded profile config "multinode-890072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:00:42.951119  655365 status.go:174] checking status of multinode-890072 ...
	I1123 11:00:42.951843  655365 cli_runner.go:164] Run: docker container inspect multinode-890072 --format={{.State.Status}}
	I1123 11:00:42.952223  655365 notify.go:221] Checking for updates...
	I1123 11:00:42.972745  655365 status.go:371] multinode-890072 host status = "Stopped" (err=<nil>)
	I1123 11:00:42.972783  655365 status.go:384] host is not running, skipping remaining checks
	I1123 11:00:42.972791  655365 status.go:176] multinode-890072 status: &{Name:multinode-890072 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 11:00:42.972820  655365 status.go:174] checking status of multinode-890072-m02 ...
	I1123 11:00:42.973120  655365 cli_runner.go:164] Run: docker container inspect multinode-890072-m02 --format={{.State.Status}}
	I1123 11:00:42.993141  655365 status.go:371] multinode-890072-m02 host status = "Stopped" (err=<nil>)
	I1123 11:00:42.993170  655365 status.go:384] host is not running, skipping remaining checks
	I1123 11:00:42.993178  655365 status.go:176] multinode-890072-m02 status: &{Name:multinode-890072-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)
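Condensed: minikube stop with no node argument stops every node in the profile, after which status exits 7 and reports both nodes as Stopped (recap of the logged commands):

  minikube -p multinode-890072 stop
  minikube -p multinode-890072 status   # exit 7: control plane and worker both Stopped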

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (56.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-890072 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-890072 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (55.898146321s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-890072 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.62s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (37.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-890072
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-890072-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-890072-m02 --driver=docker  --container-runtime=crio: exit status 14 (111.71606ms)

                                                
                                                
-- stdout --
	* [multinode-890072-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-890072-m02' is duplicated with machine name 'multinode-890072-m02' in profile 'multinode-890072'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-890072-m03 --driver=docker  --container-runtime=crio
E1123 11:01:40.153762  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-890072-m03 --driver=docker  --container-runtime=crio: (35.118165532s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-890072
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-890072: exit status 80 (395.599599ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-890072 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-890072-m03 already exists in multinode-890072-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-890072-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-890072-m03: (2.069265642s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.74s)
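Condensed, the naming rules exercised above: a new profile cannot reuse a machine name that already belongs to an existing multi-node profile (exit 14, MK_USAGE), and node add fails with exit 80 when the node name it would generate collides with another profile (recap of the logged commands, with minikube for the test binary):

  minikube start -p multinode-890072-m02 --driver=docker --container-runtime=crio   # exit 14: name duplicated inside profile multinode-890072
  minikube start -p multinode-890072-m03 --driver=docker --container-runtime=crio   # allowed: creates a separate profile
  minikube node add -p multinode-890072                                             # exit 80: generated node name already exists
  minikube delete -p multinode-890072-m03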

                                                
                                    
x
+
TestPreload (122.18s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-451645 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-451645 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (59.930989327s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-451645 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-451645 image pull gcr.io/k8s-minikube/busybox: (2.36322746s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-451645
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-451645: (5.939442816s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-451645 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-451645 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.207418977s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-451645 image list
helpers_test.go:175: Cleaning up "test-preload-451645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-451645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-451645: (2.491762107s)
--- PASS: TestPreload (122.18s)
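Condensed, the preload scenario above: create the cluster with the preload tarball disabled, pull an extra image, stop, restart with defaults, and list images to confirm the pulled image survived the restart (recap of the logged commands):

  minikube start -p test-preload-451645 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
  minikube -p test-preload-451645 image pull gcr.io/k8s-minikube/busybox
  minikube stop -p test-preload-451645
  minikube start -p test-preload-451645 --memory=3072 -v=1 --wait=true --driver=docker --container-runtime=crio
  minikube -p test-preload-451645 image list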

                                                
                                    
x
+
TestScheduledStopUnix (108.66s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-072914 --memory=3072 --driver=docker  --container-runtime=crio
E1123 11:04:38.857567  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-072914 --memory=3072 --driver=docker  --container-runtime=crio: (32.00250264s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072914 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 11:04:55.973282  669403 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:04:55.973503  669403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:04:55.973536  669403 out.go:374] Setting ErrFile to fd 2...
	I1123 11:04:55.973567  669403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:04:55.973845  669403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:04:55.974131  669403 out.go:368] Setting JSON to false
	I1123 11:04:55.974282  669403 mustload.go:66] Loading cluster: scheduled-stop-072914
	I1123 11:04:55.974685  669403 config.go:182] Loaded profile config "scheduled-stop-072914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:04:55.974800  669403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/config.json ...
	I1123 11:04:55.975019  669403 mustload.go:66] Loading cluster: scheduled-stop-072914
	I1123 11:04:55.975174  669403 config.go:182] Loaded profile config "scheduled-stop-072914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-072914 -n scheduled-stop-072914
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072914 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 11:04:56.424395  669492 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:04:56.424526  669492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:04:56.424541  669492 out.go:374] Setting ErrFile to fd 2...
	I1123 11:04:56.424546  669492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:04:56.425753  669492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:04:56.426211  669492 out.go:368] Setting JSON to false
	I1123 11:04:56.427164  669492 daemonize_unix.go:73] killing process 669425 as it is an old scheduled stop
	I1123 11:04:56.427441  669492 mustload.go:66] Loading cluster: scheduled-stop-072914
	I1123 11:04:56.427898  669492 config.go:182] Loaded profile config "scheduled-stop-072914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:04:56.428020  669492 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/config.json ...
	I1123 11:04:56.428244  669492 mustload.go:66] Loading cluster: scheduled-stop-072914
	I1123 11:04:56.428413  669492 config.go:182] Loaded profile config "scheduled-stop-072914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 11:04:56.434596  541900 retry.go:31] will retry after 53.742µs: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.434807  541900 retry.go:31] will retry after 81.981µs: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.435931  541900 retry.go:31] will retry after 250.538µs: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.437072  541900 retry.go:31] will retry after 211.163µs: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.438180  541900 retry.go:31] will retry after 321.369µs: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.439310  541900 retry.go:31] will retry after 712.708µs: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.440399  541900 retry.go:31] will retry after 1.632481ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.442598  541900 retry.go:31] will retry after 1.335141ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.444802  541900 retry.go:31] will retry after 3.679182ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.449081  541900 retry.go:31] will retry after 5.056026ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.454243  541900 retry.go:31] will retry after 2.978846ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.457534  541900 retry.go:31] will retry after 8.280029ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.466555  541900 retry.go:31] will retry after 18.911758ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.485794  541900 retry.go:31] will retry after 29.080359ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.514998  541900 retry.go:31] will retry after 17.391817ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
I1123 11:04:56.533271  541900 retry.go:31] will retry after 35.092032ms: open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072914 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-072914 -n scheduled-stop-072914
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-072914
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072914 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 11:05:22.386247  669861 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:05:22.386982  669861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:05:22.387029  669861 out.go:374] Setting ErrFile to fd 2...
	I1123 11:05:22.387050  669861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:05:22.387346  669861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:05:22.387671  669861 out.go:368] Setting JSON to false
	I1123 11:05:22.387829  669861 mustload.go:66] Loading cluster: scheduled-stop-072914
	I1123 11:05:22.388257  669861 config.go:182] Loaded profile config "scheduled-stop-072914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:05:22.388354  669861 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/scheduled-stop-072914/config.json ...
	I1123 11:05:22.388581  669861 mustload.go:66] Loading cluster: scheduled-stop-072914
	I1123 11:05:22.388751  669861 config.go:182] Loaded profile config "scheduled-stop-072914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-072914
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-072914: exit status 7 (67.493962ms)

                                                
                                                
-- stdout --
	scheduled-stop-072914
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-072914 -n scheduled-stop-072914
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-072914 -n scheduled-stop-072914: exit status 7 (65.460256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-072914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-072914
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-072914: (5.036448998s)
--- PASS: TestScheduledStopUnix (108.66s)
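Condensed, the scheduled-stop flow above: schedule a stop, reschedule it (the previously daemonized stop process is killed, as the "killing process ... old scheduled stop" line shows), cancel, then schedule a short 15s stop and confirm the profile ends up Stopped, with status exiting 7 (recap of the logged commands):

  minikube stop -p scheduled-stop-072914 --schedule 5m
  minikube stop -p scheduled-stop-072914 --schedule 15s      # replaces the pending 5m schedule
  minikube stop -p scheduled-stop-072914 --cancel-scheduled  # "All existing scheduled stops cancelled"
  minikube stop -p scheduled-stop-072914 --schedule 15s
  minikube status -p scheduled-stop-072914                   # exit 7 once the scheduled stop has fired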

                                                
                                    
x
+
TestInsufficientStorage (13.22s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-588027 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-588027 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.649788753s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70f28c32-e516-4ee8-b4e7-bf06c7efaf05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-588027] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e42d4d9c-3d2c-4b71-8e41-b3f3820cd08a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"f941c590-fd13-4d3a-b743-f26e19076c11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14f70a71-9de9-402f-b8de-b683d7b5414c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig"}}
	{"specversion":"1.0","id":"67ff1a47-e305-4989-a30d-8e8b3624acc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube"}}
	{"specversion":"1.0","id":"7dd240b9-b921-4163-a9e8-56774dcc0c6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4512aca7-792f-4ad6-a7f0-5fadb8bda083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b766e528-26a2-4c77-bb58-f8fca4850d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"76857067-0218-47a5-bcd3-e3f4b0ea2ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"51b0937c-4c79-4c64-819c-0e2a4ccce129","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d916482-61b7-4258-974a-25db810ea9a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"921bea06-c77d-422f-be1c-69a60c84661d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-588027\" primary control-plane node in \"insufficient-storage-588027\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd5088d0-2eb1-4f33-8136-54cc1d0bf8c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d38678b-8e74-4a9c-98ba-9aac9ee52f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b599411-6ce2-43d0-bb52-8b475d8e2015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-588027 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-588027 --output=json --layout=cluster: exit status 7 (303.394695ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-588027","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-588027","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 11:06:23.509924  671562 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-588027" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-588027 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-588027 --output=json --layout=cluster: exit status 7 (302.885307ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-588027","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-588027","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 11:06:23.812201  671627 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-588027" does not appear in /home/jenkins/minikube-integration/21968-540037/kubeconfig
	E1123 11:06:23.822151  671627 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/insufficient-storage-588027/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-588027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-588027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-588027: (1.966608864s)
--- PASS: TestInsufficientStorage (13.22s)
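Condensed, the storage check above: with MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (visible in the JSON output) making /var appear full, start exits 26 (RSRC_DOCKER_STORAGE) and the cluster-layout status reports StatusCode 507 / InsufficientStorage (recap of the logged commands):

  minikube start -p insufficient-storage-588027 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio   # exit 26
  minikube status -p insufficient-storage-588027 --output=json --layout=cluster                                                    # exit 7, StatusCode 507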

                                                
                                    
x
+
TestRunningBinaryUpgrade (66.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3272464075 start -p running-upgrade-535796 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1123 11:09:38.856987  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3272464075 start -p running-upgrade-535796 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.909685464s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-535796 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-535796 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.47275629s)
helpers_test.go:175: Cleaning up "running-upgrade-535796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-535796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-535796: (2.005924697s)
--- PASS: TestRunningBinaryUpgrade (66.22s)
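Condensed, the running-binary upgrade above: a cluster created with the archived v1.32.0 binary is taken over in place by the binary under test, then deleted (recap of the logged commands):

  /tmp/minikube-v1.32.0.3272464075 start -p running-upgrade-535796 --memory=3072 --vm-driver=docker --container-runtime=crio
  minikube start -p running-upgrade-535796 --memory=3072 -v=1 --driver=docker --container-runtime=crio   # same profile, new binary
  minikube delete -p running-upgrade-535796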

                                                
                                    
x
+
TestKubernetesUpgrade (112.58s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.590656434s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-297018
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-297018: (1.465689794s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-297018 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-297018 status --format={{.Host}}: exit status 7 (94.485431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.990657785s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-297018 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (122.370548ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-297018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-297018
	    minikube start -p kubernetes-upgrade-297018 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2970182 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-297018 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.569449313s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-297018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-297018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-297018: (2.573374783s)
--- PASS: TestKubernetesUpgrade (112.58s)
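Condensed, the version-change sequence above: upgrade v1.28.0 -> v1.34.1 across a stop, confirm a downgrade attempt is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), then restart at the current version (recap of the logged commands):

  minikube start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  minikube stop -p kubernetes-upgrade-297018
  minikube start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
  minikube start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 106: downgrade refused
  minikube start -p kubernetes-upgrade-297018 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # restart at current version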

                                                
                                    
x
+
TestMissingContainerUpgrade (116.19s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2883290726 start -p missing-upgrade-994793 --memory=3072 --driver=docker  --container-runtime=crio
E1123 11:06:40.152786  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2883290726 start -p missing-upgrade-994793 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.420383544s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-994793
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-994793: (1.114976183s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-994793
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-994793 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-994793 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.360283251s)
helpers_test.go:175: Cleaning up "missing-upgrade-994793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-994793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-994793: (2.245770067s)
--- PASS: TestMissingContainerUpgrade (116.19s)
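Condensed, the missing-container upgrade above: the old binary creates the cluster, its Docker container is stopped and removed behind minikube's back, and the new binary recreates it on start (recap of the logged commands):

  /tmp/minikube-v1.32.0.2883290726 start -p missing-upgrade-994793 --memory=3072 --driver=docker --container-runtime=crio
  docker stop missing-upgrade-994793
  docker rm missing-upgrade-994793
  minikube start -p missing-upgrade-994793 --memory=3072 -v=1 --driver=docker --container-runtime=crio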

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350243 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-350243 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (93.235645ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-350243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
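As the exit 14 above shows, --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal follow-up, based on minikube's own suggestion in the stderr output (clear any globally configured version, then start without Kubernetes):

  minikube config unset kubernetes-version
  minikube start -p NoKubernetes-350243 --no-kubernetes --driver=docker --container-runtime=crio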

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (38.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350243 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350243 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.584242262s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-350243 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.02s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.007271521s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-350243 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-350243 status -o json: exit status 2 (324.241496ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-350243","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-350243
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-350243: (2.773882065s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.11s)
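The status output above is the point of this step: after restarting the profile with --no-kubernetes, the host container is up while Kubelet and APIServer are Stopped, so status -o json exits 2. Checked by hand that looks roughly like this (same profile name; the --format template matches what later tests in this report use):

	$ minikube -p NoKubernetes-350243 status -o json              # exit 2 while the Kubernetes components are Stopped
	$ minikube status --format={{.Host}} -p NoKubernetes-350243   # still prints Running for the host itself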

                                                
                                    
TestNoKubernetes/serial/Start (9.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.580162076s)
--- PASS: TestNoKubernetes/serial/Start (9.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21968-540037/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-350243 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-350243 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.699171ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
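The non-zero exit is the pass condition here: systemctl reports kubelet as inactive (status 3 inside the node), and that failure propagates back through the ssh session. A hand-run sketch of the same check (dropping --quiet so the state is printed):

	$ minikube ssh -p NoKubernetes-350243 "sudo systemctl is-active kubelet"   # prints "inactive"
	$ echo $?                                                                  # non-zero, i.e. kubelet is not running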

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-350243
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-350243: (1.28205973s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350243 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350243 --driver=docker  --container-runtime=crio: (9.971366089s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-350243 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-350243 "sudo systemctl is-active --quiet service kubelet": exit status 1 (477.710116ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1071760412 start -p stopped-upgrade-209526 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1071760412 start -p stopped-upgrade-209526 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.201252921s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1071760412 -p stopped-upgrade-209526 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1071760412 -p stopped-upgrade-209526 stop: (1.255188714s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-209526 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-209526 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.165474478s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.62s)
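Condensed, the upgrade path exercised here is: provision the profile with the old release binary, stop it, then start the same profile again with the binary under test. The equivalent manual sequence, using the temp copy of v1.32.0 from this run:

	$ /tmp/minikube-v1.32.0.1071760412 start -p stopped-upgrade-209526 --memory=3072 --vm-driver=docker --container-runtime=crio
	$ /tmp/minikube-v1.32.0.1071760412 -p stopped-upgrade-209526 stop
	$ out/minikube-linux-arm64 start -p stopped-upgrade-209526 --memory=3072 --driver=docker --container-runtime=crio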

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-209526
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-209526: (1.213089511s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
TestPause/serial/Start (87.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-851396 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-851396 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m27.402866293s)
--- PASS: TestPause/serial/Start (87.40s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (19.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-851396 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-851396 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.766403366s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (19.80s)

                                                
                                    
TestNetworkPlugins/group/false (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-344709 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-344709 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (201.493261ms)

                                                
                                                
-- stdout --
	* [false-344709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 11:11:20.023550  703344 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:11:20.023788  703344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:20.023822  703344 out.go:374] Setting ErrFile to fd 2...
	I1123 11:11:20.023843  703344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:11:20.024139  703344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-540037/.minikube/bin
	I1123 11:11:20.024655  703344 out.go:368] Setting JSON to false
	I1123 11:11:20.025731  703344 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14029,"bootTime":1763882251,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 11:11:20.025857  703344 start.go:143] virtualization:  
	I1123 11:11:20.029582  703344 out.go:179] * [false-344709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:11:20.032916  703344 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:11:20.032994  703344 notify.go:221] Checking for updates...
	I1123 11:11:20.039703  703344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:11:20.042748  703344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-540037/kubeconfig
	I1123 11:11:20.045790  703344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-540037/.minikube
	I1123 11:11:20.048768  703344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:11:20.051681  703344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 11:11:20.055285  703344 config.go:182] Loaded profile config "pause-851396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 11:11:20.055387  703344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:11:20.088221  703344 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:11:20.088354  703344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:11:20.146326  703344 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 11:11:20.13646601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:11:20.146433  703344 docker.go:319] overlay module found
	I1123 11:11:20.149583  703344 out.go:179] * Using the docker driver based on user configuration
	I1123 11:11:20.152462  703344 start.go:309] selected driver: docker
	I1123 11:11:20.152484  703344 start.go:927] validating driver "docker" against <nil>
	I1123 11:11:20.152499  703344 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:11:20.156064  703344 out.go:203] 
	W1123 11:11:20.159048  703344 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 11:11:20.162021  703344 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-344709 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-344709" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 11:10:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-851396
contexts:
- context:
    cluster: pause-851396
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 11:10:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-851396
  name: pause-851396
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-851396
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.crt
    client-key: /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-344709

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-344709"

                                                
                                                
----------------------- debugLogs end: false-344709 [took: 3.467206904s] --------------------------------
helpers_test.go:175: Cleaning up "false-344709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-344709
--- PASS: TestNetworkPlugins/group/false (3.87s)
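The exit-14 refusal is expected: with --container-runtime=crio minikube insists on a CNI, so --cni=false is rejected before any cluster exists, which is also why every debugLogs probe above reports a missing profile/context. A sketch of the rejected combination versus one that should satisfy the check (bridge is one of minikube's built-in CNI options; any supported CNI would do):

	$ minikube start -p false-344709 --cni=false --driver=docker --container-runtime=crio    # rejected: the "crio" container runtime requires CNI
	$ minikube start -p false-344709 --cni=bridge --driver=docker --container-runtime=crio   # explicit CNI satisfies the requirement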

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (60.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.82468803s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-378086 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [879f29eb-c272-4f6c-b331-1495c2897434] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [879f29eb-c272-4f6c-b331-1495c2897434] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003500838s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-378086 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
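The deploy step is plain kubectl against the test context: apply the busybox manifest, wait for the pod labelled integration-test=busybox to become Ready, then exec a trivial command in it. Roughly equivalent standalone commands (the harness does its own polling; kubectl wait is a stand-in for that loop):

	$ kubectl --context old-k8s-version-378086 create -f testdata/busybox.yaml
	$ kubectl --context old-k8s-version-378086 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context old-k8s-version-378086 exec busybox -- /bin/sh -c "ulimit -n"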

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-378086 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-378086 --alsologtostderr -v=3: (12.02819093s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086: exit status 7 (87.228481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
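Exit status 7 from minikube status is informational rather than fatal here: the command signals cluster state through its exit code, and a stopped host is exactly what this step expects before enabling the addon (hence the "may be ok" note). The manual equivalent is roughly:

	$ minikube status --format={{.Host}} -p old-k8s-version-378086   # prints Stopped; non-zero exit by design
	$ minikube addons enable dashboard -p old-k8s-version-378086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4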

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (55.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1123 11:14:21.929645  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:14:38.856857  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:14:43.221537  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-378086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.161563124s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-378086 -n old-k8s-version-378086
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-p96px" [2df082a1-1ad6-44e1-8263-c77434c26762] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003401712s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-p96px" [2df082a1-1ad6-44e1-8263-c77434c26762] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004007902s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-378086 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-378086 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (79.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m19.286483189s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 11:16:40.152827  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.415848679s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-258179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4f4d26d7-32a3-4ce1-b0ab-085f6459a353] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4f4d26d7-32a3-4ce1-b0ab-085f6459a353] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.038679238s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-258179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-258179 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-258179 --alsologtostderr -v=3: (12.134257076s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179: exit status 7 (82.051637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-258179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-258179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.887794538s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-258179 -n no-preload-258179
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-715679 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fbad8dcc-4eb1-420d-badc-d21b074bec9c] Pending
helpers_test.go:352: "busybox" [fbad8dcc-4eb1-420d-badc-d21b074bec9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fbad8dcc-4eb1-420d-badc-d21b074bec9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003553165s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-715679 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-715679 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-715679 --alsologtostderr -v=3: (12.462431807s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679: exit status 7 (71.867628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-715679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (54.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-715679 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.41269119s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-715679 -n embed-certs-715679
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dccnq" [c5c96a56-1f3f-4c03-b9c1-334ead3f8de8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003955131s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dccnq" [c5c96a56-1f3f-4c03-b9c1-334ead3f8de8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004568935s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-258179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-258179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.816509992s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jz7sf" [fb26cb95-b5c7-4522-be95-da876e2c603d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004733618s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jz7sf" [fb26cb95-b5c7-4522-be95-da876e2c603d] Running
E1123 11:18:49.909325  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:49.915941  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:49.927279  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:49.948647  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:49.989987  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:50.071383  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:50.232959  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:50.554402  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:18:51.196060  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003557376s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-715679 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-715679 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.43s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 11:19:10.403518  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:19:30.884822  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:19:38.856456  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (37.424904309s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-058071 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-058071 --alsologtostderr -v=3: (1.348384778s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071: exit status 7 (74.529477ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-058071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.45s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-058071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.648079365s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-058071 -n newest-cni-058071
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [132528ae-9172-48d0-89be-41e905f4ee49] Pending
helpers_test.go:352: "busybox" [132528ae-9172-48d0-89be-41e905f4ee49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [132528ae-9172-48d0-89be-41e905f4ee49] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003397081s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (14.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-103096 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-103096 --alsologtostderr -v=3: (14.846858187s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-058071 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1123 11:20:11.846602  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m22.907294464s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096: exit status 7 (105.65529ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-103096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-103096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m5.3694316s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103096 -n default-k8s-diff-port-103096
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7s8z9" [e36779bb-5521-45b7-9d2f-74bc1b446af9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004342354s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7s8z9" [e36779bb-5521-45b7-9d2f-74bc1b446af9] Running
E1123 11:21:33.768849  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003302273s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-103096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
I1123 11:21:34.253638  541900 config.go:182] Loaded profile config "auto-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-344709 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-px8cf" [078e2dac-ed0e-46f5-a0a2-e101a9a655bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-px8cf" [078e2dac-ed0e-46f5-a0a2-e101a9a655bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004351004s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103096 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m29.131940858s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.13s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.18s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1123 11:22:11.509158  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:22:31.991199  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.17941229s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xhxlc" [ed6cad09-e67f-4a45-85ce-bf75e27e2f17] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-xhxlc" [ed6cad09-e67f-4a45-85ce-bf75e27e2f17] Running
E1123 11:23:12.952633  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003648927s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lxkmq" [8ad01100-ce82-4890-afff-f74aae11cf0c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003988938s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-344709 "pgrep -a kubelet"
I1123 11:23:16.603257  541900 config.go:182] Loaded profile config "calico-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6n78f" [2f1bf7a9-ca73-4cd9-935c-208af9be9f9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6n78f" [2f1bf7a9-ca73-4cd9-935c-208af9be9f9e] Running
I1123 11:23:20.914989  541900 config.go:182] Loaded profile config "kindnet-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00313058s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-344709 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8sf9z" [709f2982-c710-47c4-9968-c7bc537a06c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8sf9z" [709f2982-c710-47c4-9968-c7bc537a06c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003859726s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (66.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.797567001s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.80s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1123 11:24:17.610176  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/old-k8s-version-378086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:34.874862  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:38.856612  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/addons-832672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:48.846116  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:48.852591  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:48.864175  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:48.885719  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:48.927088  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:49.010427  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:49.171983  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:49.493998  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:50.136039  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:51.417554  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:53.979902  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:24:59.101769  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m16.360993876s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-344709 "pgrep -a kubelet"
I1123 11:25:00.622754  541900 config.go:182] Loaded profile config "custom-flannel-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.63s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qnglm" [e26830ea-4a5a-4290-a212-17329e4f2b75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qnglm" [e26830ea-4a5a-4290-a212-17329e4f2b75] Running
E1123 11:25:09.343810  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003395077s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-344709 "pgrep -a kubelet"
I1123 11:25:14.535234  541900 config.go:182] Loaded profile config "enable-default-cni-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mnkcr" [52968657-e498-4bbe-af13-2c97f92c07cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mnkcr" [52968657-e498-4bbe-af13-2c97f92c07cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004661825s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (64.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.775676164s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.78s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (48.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1123 11:26:10.787399  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/default-k8s-diff-port-103096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.589334  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.595700  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.607138  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.628477  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.669870  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.751378  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:34.913082  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:35.234767  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:35.876316  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:37.157718  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-344709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (48.414355399s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-344709 "pgrep -a kubelet"
I1123 11:26:38.768190  541900 config.go:182] Loaded profile config "bridge-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rqtv2" [ce866ea8-2b99-454d-a11f-c8f9197f5309] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 11:26:39.719088  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:40.153763  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/functional-336858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rqtv2" [ce866ea8-2b99-454d-a11f-c8f9197f5309] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00399513s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4rxnb" [137dd5dc-0633-48a9-86ec-4db7ec807598] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004026315s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-344709 "pgrep -a kubelet"
E1123 11:26:44.841502  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1123 11:26:45.260563  541900 config.go:182] Loaded profile config "flannel-344709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-344709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pjfz2" [63f96a82-0315-4f68-bc5d-6139fa0e4dea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pjfz2" [63f96a82-0315-4f68-bc5d-6139fa0e4dea] Running
E1123 11:26:51.013673  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/no-preload-258179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:26:55.083156  541900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/auto-344709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004181541s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-344709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-344709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-549884 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-549884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-549884
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-546564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-546564
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.98s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-344709 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-344709

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-344709

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/hosts:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/resolv.conf:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-344709

>>> host: crictl pods:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: crictl containers:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> k8s: describe netcat deployment:
error: context "kubenet-344709" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-344709" does not exist

>>> k8s: netcat logs:
error: context "kubenet-344709" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-344709" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-344709" does not exist

>>> k8s: coredns logs:
error: context "kubenet-344709" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-344709" does not exist

>>> k8s: api server logs:
error: context "kubenet-344709" does not exist

>>> host: /etc/cni:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: ip a s:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: ip r s:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: iptables-save:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: iptables table nat:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-344709" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-344709" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-344709" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: kubelet daemon config:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> k8s: kubelet logs:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 11:10:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-851396
contexts:
- context:
    cluster: pause-851396
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 11:10:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-851396
  name: pause-851396
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-851396
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.crt
    client-key: /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-344709

>>> host: docker daemon status:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: docker daemon config:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: docker system info:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: cri-docker daemon status:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: cri-docker daemon config:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: cri-dockerd version:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: containerd daemon status:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: containerd daemon config:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: containerd config dump:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: crio daemon status:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: crio daemon config:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: /etc/crio:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

>>> host: crio config:
* Profile "kubenet-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-344709"

----------------------- debugLogs end: kubenet-344709 [took: 3.81582937s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-344709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-344709
--- SKIP: TestNetworkPlugins/group/kubenet (3.98s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.01s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-344709 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-344709

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-344709

>>> host: /etc/nsswitch.conf:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /etc/hosts:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /etc/resolv.conf:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-344709

>>> host: crictl pods:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: crictl containers:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> k8s: describe netcat deployment:
error: context "cilium-344709" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-344709" does not exist

>>> k8s: netcat logs:
error: context "cilium-344709" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-344709" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-344709" does not exist

>>> k8s: coredns logs:
error: context "cilium-344709" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-344709" does not exist

>>> k8s: api server logs:
error: context "cilium-344709" does not exist

>>> host: /etc/cni:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: ip a s:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: ip r s:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: iptables-save:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: iptables table nat:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-344709

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-344709

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-344709" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-344709" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-344709

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-344709

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-344709" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-344709" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-344709" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-344709" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-344709" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: kubelet daemon config:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> k8s: kubelet logs:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-540037/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 11:11:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-851396
contexts:
- context:
    cluster: pause-851396
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 11:11:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-851396
  name: pause-851396
current-context: pause-851396
kind: Config
preferences: {}
users:
- name: pause-851396
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.crt
    client-key: /home/jenkins/minikube-integration/21968-540037/.minikube/profiles/pause-851396/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-344709

>>> host: docker daemon status:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: docker daemon config:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: docker system info:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: cri-docker daemon status:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: cri-docker daemon config:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: cri-dockerd version:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: containerd daemon status:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

>>> host: containerd daemon config:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-344709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344709"

                                                
                                                
----------------------- debugLogs end: cilium-344709 [took: 5.766751088s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-344709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-344709
--- SKIP: TestNetworkPlugins/group/cilium (6.01s)
